Mark Côté: Lando demo

Lando is so close now that I can practically smell the tibanna. Israel put together a quick demo of Phabricator/BMO/Lando/hg running on his local system, which is only a few patches away from being a deployed reality.

One caveat: this demo uses Phabricator’s web UI to post the patch. We highly recommend using Arcanist, Phabricator’s command-line tool, to submit patches instead, mainly because it preserves all the relevant changeset metadata.

With that out of the way, fasten your cape and take a look:

The Mozilla Blog: Snips Uses Rust to Build an Embedded Voice Assistant

The team at Paris-based Snips has created a voice assistant that can be embedded in a single device or used in a home network to control lights, thermostat, music, and more. You can build a home hub on a Raspberry Pi and ask it for a weather report, to play your favorite song, or to brew up a double espresso. Manufacturers like Keecker are adding Snips’ technology to products like multimedia home robots. And Snips works closely with leaders across the value chain, like NVIDIA, EBV, and Analog Devices, in order to voice-enable an increasingly wide range of device types, from speakers to home automation systems to cars.

Snips’ solution is different from other voice assistants, in that it runs its entire code base on a single device and can work without an Internet connection. Snips’ software stack includes its wake word (“Hey Snips”), application logic, speech recognition engine, and language understanding module.

By comparison, products like Amazon Echo and Google Home just run code for the wake word, and they are dependent on the cloud to process queries and generate responses. That approach opens the door for companies to potentially collect users’ speech data, raising privacy concerns.

How can Snips embed all the code for a voice assistant onto a single device? They wrote it using the Rust systems programming language.

Rust is a highly efficient programming language that was developed in an open source project and is sponsored by Mozilla. The first stable release of Rust was in May 2015. Now, the Rust community is seeing companies adopt Rust to build commercial software, often at the cutting edge of their fields.

Rust is compelling because it combines attributes from different kinds of languages, so it can offer high performance and low memory overhead as well as memory safety and cross-compilation to different platforms. That made it a great fit for Snips’ use case: embedding code into a range of device types with limited memory and processing power.

Why Rust?

Snips Principal Engineer Mathieu Poumeyrol had used Rust at a previous job, writing multi-platform code. Instead of having to write and then rewrite for each platform, he used Rust’s cross-compilation capability. That let him write once and translate his code so it could run well on different machines, without days or weeks of hand-coding rework.

Poumeyrol pushed hard for Snips to consider adopting Rust. It had the traits Snips needed – efficiency, portability, and safety – and it had the performance characteristics to be able to run wicked fast, even on small devices.

“Snips was already using very modern languages for both mobile development and the back end, like Swift, Kotlin, and Scala,” Poumeyrol said. “That played a big part in convincing our engineers to try Rust.”

After more investigation, the Snips technical team was convinced that Rust was the right way to go. “We went all-in on Rust in 2016,” said Snips CTO Joseph Dureau. “And we are very happy with that decision.”

Performance and Portability

The primary challenge for Snips’ engineering team was this: How can we embed a voice assistant so it runs efficiently and safely on all of our clients’ connected devices, regardless of the operating system and architecture they use?

Rust was the answer to that challenge. The language offered a combination of virtues: the performance of a low-level language like C/C++, the capability to port code to new platforms, and memory safety features designed to enhance security, even when code is running on connected devices that are relatively exposed. (See how crockpots were hacked in 2016.)

Performance: Rust code is fast and efficient. It can run on resource-constrained devices with no degradation in performance. The language manages zero-cost abstraction in the same spirit as C++, Poumeyrol said, while maintaining the same safety level as a language with garbage collection. Rust delivers high-level features without a runtime performance penalty, which was exactly what Snips needed.
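To make “zero-cost abstraction” concrete, here is a minimal sketch in Rust (illustrative code only, not anything from Snips’ codebase): the high-level iterator pipeline states the intent directly, and rustc compiles it down to the same kind of tight loop a programmer would write by hand, so the abstraction adds no runtime cost.

    // Sum the squares of the even numbers in a slice.
    // The iterator chain reads like a description of the problem, yet the
    // optimizer reduces it to a plain loop over the data.
    fn sum_of_even_squares(values: &[i64]) -> i64 {
        values
            .iter()
            .copied()
            .filter(|v| v % 2 == 0) // keep the even values
            .map(|v| v * v)         // square them
            .sum()                  // add them up
    }

    fn main() {
        let data = [1, 2, 3, 4, 5, 6];
        println!("{}", sum_of_even_squares(&data)); // prints 56
    }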

Portability: Rust’s first-class compiler, rustc, allows Snips’ engineers to write code once and port it to new devices. This is critical, because the company adds new device platforms to its solution every few weeks. Under the hood, rustc is implemented on top of LLVM, a solid, proven compiler framework. LLVM enables programmers to cross-compile code to almost any modern hardware architecture, from mobile devices to desktops and servers.

“We must be able to code once and run our code on many platforms in an optimal and secure way,” Dureau said. “Everything we write for the embedded voice assistant, we write in Rust.”

Safety: Rust has a unique ownership model that makes its code, once compiled, safer than C/C++ and easier to maintain over time. The language uses concepts of ownership, moves, and borrows to keep track of memory resources and make sure they are being used appropriately.
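As a rough sketch of those concepts (again, illustrative code rather than anything from Snips), the functions below first borrow a value, leaving the caller as the owner, and then take ownership of it by move, after which the caller can no longer touch it; the compiler tracks all of this statically.

    // A borrow: the function gets temporary, read-only access; the caller keeps ownership.
    fn total_length(words: &[String]) -> usize {
        words.iter().map(|w| w.len()).sum()
    }

    // A move: the function takes ownership of the vector and is responsible for freeing it.
    fn consume(words: Vec<String>) -> String {
        words.join(" ")
    }

    fn main() {
        let words = vec!["hey".to_string(), "snips".to_string()];
        println!("{}", total_length(&words)); // borrowed: `words` is still usable afterwards
        println!("{}", consume(words));       // moved: `words` cannot be used after this line
    }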

Here’s how Rust’s memory safety features work. After programmers write new code, they run it through the compiler. The rustc compiler checks the code for errors. If it finds code that does not handle memory resources correctly, the compile step will not complete. That makes it more difficult to put memory-unsafe code into a production environment. The compiler helps in another way: it gives feedback about each error and, when possible, suggests fixes. This feedback saves a lot of time and lets new programmers learn by doing, with a lowered risk of introducing security vulnerabilities.
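For example, a tiny program like the following hypothetical snippet is rejected outright, because it tries to use a value after ownership has moved elsewhere; rustc points at the move and suggests borrowing or cloning instead.

    fn main() {
        let greeting = String::from("Hey Snips");
        let _owned = greeting;    // ownership of the String moves to `_owned`
        println!("{}", greeting); // does not compile: `greeting` was moved above,
                                  // so rustc stops the build here instead of letting
                                  // a use-after-move reach production
    }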

Poumeyrol is a fan of the Rust compiler. “At compilation time, it can make sure the resource management is done correctly, so we have no surprises at runtime,” he said.

One Fast Development Cycle

Working in Rust, the Snips technical team was able to complete its voice platform in record time: It took Snips less than a year to complete the coding work in Rust and put its voice assistant into production.

Memory safety played a large role in accelerating Snips’ development process. The developers could find and fix bugs using feedback from the Rust compiler. Those early corrections made the development cycle much shorter, because it’s simpler to fix bugs early rather than waiting until runtime. It also sped up the QA (quality assurance) phase of the process, so Snips was able to move new features into production more quickly.

Snips’ solution currently supports a dozen different device platforms, including the Raspberry Pi 3, DragonBoard, Sagemcom, Jetson TX2, IMX.8M, and others. Rust has made it simpler for the team to extend support to new boards, because they can reuse the same code base rather than writing custom implementations for each architecture.

Learning Rust

Today, all Snips’ embedded code is written in Rust. Over time, Poumeyrol has trained the embedded software engineers to code in Rust, as well as a significant number of the company’s Machine Learning scientists. As they all got more familiar with the language, the team’s go-to reference was the second edition of The Rust Programming Language Book, published online by the open source Rust Project.

The whole training process was fairly quick and organic, Poumeyrol said. The engineers he trained in turn shared their expertise with others, until the entire embedded software engineering team was actively learning the language.

“Rust is a language of its time,” Poumeyrol said. “Once one has a taste for these modern languages, it can be very frustrating to come back to C or C++ when you suddenly need portability and efficiency.” Poumeyrol has seen broad adoption of Rust in the larger industry as well, as software engineers and machine learning scientists see it as a useful tool that can solve persistent coding problems.

The post Snips Uses Rust to Build an Embedded Voice Assistant appeared first on The Mozilla Blog.

Air Mozilla: The Joy of Coding - Episode 129

mconley livehacks on real Firefox bugs while thinking aloud.

Air Mozilla: Weekly SUMO Community Meeting, 21 Feb 2018

This is the SUMO weekly call.

QMO: Firefox 59 Beta 10 Testday Results

Hello everyone,

As you may already know, last Friday – February 16th – we held a new Testday event for Firefox 59 Beta 10.

Thank you to Mohammed Adam, Abhishek Haridass, Fahima Zulfath A. and Surentharan.R.A. from the India QA Community team for helping us make Mozilla a better place.

And special thanks go to the Bangladesh QA Community team for doing such a great job and participating in such large numbers. These are all the participants: Syed Tanvir, Tanvir Mazharu, Sayed Ibn MAsud, Anika Alam, Anamul Hasan, Kazi Ashraf Hossain, Arif, Neaous sharif, Hasibul Hasan Shanto, Hasibul Hasan Abir, Wahid Mohammad Mahfuz, Syed Tanvir, Serajush Salekin, Saheda Reza Antora, Farha, Mehedi Hasan, Habibur Rahman Habib, Sontus Chandra Anik, Md. Zahedul Hossain, Labisa Reza, Foysal Ahmed, Md Solaman Sarif, Md. Rahimul Islam, Abu Sayeed Khan, Hasin Ishrak, Mim Ahmed Joy, Arman, Tanvir Rahman, Md.Jahid Hassan, Emon Ahmed, TIS Salehin, Md Tanvir Hossain, Sajedul Islam, Dibbendu Kumar Sarkar, Masum Billah Musa, Maruf Rahman and Mariya Akter.

Results:

– several test cases executed for the Find Toolbar and Search suggestions features.

– bugs verified: 925275, 1426094, 1430391, 1228111, 1430773, 1115976, 989642, 1436110, 1420601, 1426920, 1436876, 1430143, 1429974, 1338497, 1435838,  1038695

Thanks for another successful testday 🙂

We hope to see you all in our next events, all the details will be posted on QMO!

David Humphrey: What Happens when you Contribute, revisited

I sat down to write a post about my students' experiences this term contributing to open source, and apparently I've written this before (and almost exactly a year ago to the day!) The thing about teaching is that it's cyclic, so you'll have to forgive me as I give a similar lecture here today.

I'm teaching two classes on open source development right now: two sections of an introductory course, and another two of a follow-up intermediate course. The students are just starting to get some releases submitted, and I've been going through their blogs, pull requests, videos (apparently this generation likes making videos, which is something new for me), tweets, and the like. I learn a lot from my students, and I wanted to share some of what I'm seeing.

Most of what I'm going to say is aimed at maintainers and existing open source developers; I'm talking to myself as much as anyone. Because the students go out and work on real open source bugs in all different projects, anything can and does happen. I can't easily prepare them for the responses they'll encounter when they put themselves out there on the web: I see it all.

When you work on (i.e., specialize, focus on) a few projects, and within a particular community, it's easy to get lulled into normalizing all sorts of behaviour that may or may not be the most productive way to work. However, when you participate in a lot of different projects, you start to see that there are many different approaches, each with varying degrees of positive and negative outcomes.

I wanted to lay before you the kinds of things I've seen this month. I'm not going to link to anything or anyone in particular. Rather, I want to show you the variety of possible scenarios when someone contributes to an open source project. For the most part, these students are at a similar level (i.e., undergraduate CS) and doing similar sorts of patches: first contribution to a project of small doc/test/code fixes, ~100 or fewer lines of code.

  • ...crickets.... It's really common for PRs to just get completely ignored. Sometimes this happens because students contribute to dead or dying projects, other times the maintainers just can't be bothered. I try to intervene before this happens, but even I can't always predict how submissions will be received (I had a big PR get rejected like this recently). Looks can be deceiving, and a lot of projects look more active than they are. I deal with this by marking the content/process of a change vs. its reception/outcome.

  • "LGTM, merged". A lot of times the code is correct, the expected process has been followed, and the PR is merged as is. There isn't a lot of criticism, but there also isn't a lot of fanfare. It's likely the 100th PR this maintainer has merged in recent weeks, and it's just another day at the office. This is pretty typical. I think it's OK, but it misses the chance to further engage someone new in your community. I wish GitHub did more to signal in a PR that this is someone's first work. That said, I think most maintainers know when a new person arrives on the scene. Take a minute to welcome them, say "thank you", and if you're smart, point them at follow up bugs to fix.

  • "I don't want this change, closing". I've seen a bunch of this recently. Students are often surprised by this, because there was a bug filed, and it looked like the change/feature was wanted. However, I'm not surprised, because most projects don't triage their bugs anymore. It's becoming increasingly difficult to look at the issues in a project and know whether or not you should tackle them. I have sympathy for both sides. I'm trying to teach students to communicate in bugs before they start working. Some do; a lot feel intimidated to expose themselves until they are sure they can fix the bug, so they start with the fix before announcing their presence. Even if you ask in a bug, a lot of projects won't respond to questions, only PRs, so it's a catch-22.

  • "Thanks, but we need to do this differently". Here you have a PR that "fixes" a bug, and a maintainer that wants it done another way. I would say that this is the expected result of a PR in 95% of cases--you always need review to catch things. However, at this point there are a few scenarios that can happen, not all of them desirable:

  1. review comments in the PR lead to follow-up work by the student in order to create the desired code
  2. the maintainer sends a PR to the student's repo/branch with some changes
  3. the maintainer just does it on their own in a new commit, closes the student's PR

I see all of these happen regularly. Not every open source project is a collaborative project, and I spend a lot of time these days trying to steer students toward and away from certain communities. There's no point spending time on a project that doesn't want your involvement. The projects that do this well take a raw, inexperienced contributor and encourage them to grow, and in so doing, help to invest in the future health of the project: "if you can fix this bug, you can fix these bugs too..."

  • "This is great! While you're fixing this, can you also fix this other related bug?" This is a great way to draw contributors deeper into a project. Often it means assigning them new bugs (i.e., not adding to the current PR) and expanding their involvement in the code. I've seen this go very well a lot of times across many projects. To be honest, I don't know why more maintainers don't just assign people bugs (I do it all the time). The worst that will happen is that people will say "no," or simply ignore the request. In many cases, though, I find that people step up and enjoy the responsibility and inclusion. What you're really saying is "I think you can do this," and people need to hear that.

  • "Thank you for this fix! We've made you a collaborator". I'm aware that this approach won't work for every project (e.g., security issues). But I see many projects doing it, and it's a fascinating approach. When someone shows up and is both willing and able to contribute, why not make them part of your core group? This week I saw one of my students not only fix bugs in a big project, but also get contrib rights and be assigned other bugs to review. That's an incredible gesture of confidence and inclusion.

  • "We really appreciated your contribution, here's a thank-you". When it happens, it's nice, and I saw it happen a few times this week. It can take a few forms, for example, sending people swag. This is beyond the reach of lots of projects, since it has a real-world cost: the swag, the shipping, and the time of the person co-ordinating it. For projects that can't send a physical thank-you, there are other ways. For example, I see prominent people/projects in the open source world use their social media presence to draw attention to contributor work. It means a lot to have someone you respect in a project take the time to thank you publicly, or to highlight your contribution. These acts build gratitude into your community, and help to encourage people to keep going.

  • "Why don't we pair program on bugs sometime." A few senior people have reached out to students contributing patches and suggested they work together in real-time. In one case that meant having the student go into their office and meeting in real life. In a few other cases it meant using remote screen sharing apps and video conferencing. What an amazing thing to spend time helping to level-up a committed contributor, and what a sign of confidence for someone who is feeling like an imposter working with so many advanced developers.

Even though it doesn't always go smoothly, I'm still a big believer in having students work on real bugs. Every interaction with the open source community is a learning opportunity, even if all it does is teach you that you don't want to do it again. Thankfully, that's not usually what happens, and even after 15 years of this, I'm still seeing new things I can't predict.

If you're an open source dev or maintainer, I'd encourage you to try submitting some PRs to a project you've never worked on before. You'll be surprised at the hoops you have to go through (they always seem more sane on your own projects), at how you feel waiting on a delayed response to your fix, and at how you're treated outside your home turf. To keep myself honest and aware of this, I try to contribute fixes to lots of new projects. It's very humbling. Thankfully, if you've lost touch with what it's like to be new, just go fix a bug in a project you don't know using a technology you've never seen before. You'll learn a lot, not least about yourself, and it might affect how you respond to people in your own projects.

Anyway, I'm sure I'll forget I've written this post again and rewrite it next year. Until then.

The Firefox Frontier: Share Exactly What You See On-Screen With Firefox Screenshots

A “screenshot” is created when you capture what’s on your computer screen, so you can save it as a reference, put it in a document, or send it as an … Read more

The post Share Exactly What You See On-Screen With Firefox Screenshots appeared first on The Firefox Frontier.

The Mozilla Blog: 20 Big Ideas to Connect the Unconnected

The National Science Foundation and Mozilla are supporting projects that keep the web accessible, decentralized, and resilient

 

Last year, the National Science Foundation (NSF) and Mozilla announced the Wireless Innovation for a Networked Society (WINS) challenges: $2 million in prizes for big ideas to connect the unconnected across the U.S.

Today, we’re announcing our first set of winners: 20 bright ideas from Detroit, Cleveland, Albuquerque, New York City, and beyond. The winners are building mesh networks, solar-powered Wi-Fi, and network infrastructure that fits inside a single backpack. Winning projects were developed by veteran researchers, enterprising college students, and everyone in-between.

What do all these projects have in common? They’re affordable, scalable, open-source, and secure.

“Some 34 million Americans — many of them located in rural communities and on tribal lands — lack high-quality Internet access,” says Jim Kurose, Assistant Director of NSF for Computer and Information Science and Engineering (CISE). “By supporting ideas like the ones that have surfaced through the WINS challenges, Internet access could be expanded to potentially millions of Americans, enabling many social and economic opportunities that come with connectivity.”

“As the value of being connected to the Internet steadily increases, Americans without affordable access to the net are increasingly excluded from a world of social, educational, and economic possibility,” adds Mozilla Fellow and WINS judge Steve Song. “The 20 projects short-listed are evidence of the potential that now exists for thoughtful, committed citizens to build affordable, scalable, secure communication infrastructure wherever it is needed.”

The 20 Stage 1 winners presented rigorously-researched design concepts and will receive between $10,000 and $60,000 each. Winners were selected by a panel of judges from organizations like Nokia, Columbia University, and the Raspberry Pi Foundation.

Up next: All winning teams — along with more than 100 other WINS submissions — are now invited to build working prototypes as part of the second stage of the competition. In August, these finalists will provide live demonstrations of their prototypes at an event in Mountain View, CA. Final awards, ranging from $50,000 to $400,000, will be announced in the fall of 2018.

 

OFF THE GRID INTERNET CHALLENGE WINNERS

When disasters strike, communications networks are among the first pieces of critical infrastructure to overload or fail. These 10 creative ideas being recognized with design prizes leverage both the internet’s decentralized design and current wireless technology to keep people connected to each other — and to vital messaging and mapping services — in the aftermath of earthquakes, hurricanes, and other disasters.

A schematic of Project Lantern | courtesy of Paper & Equator

[1] Project Lantern | First Place ($60,000) A Lantern is a keychain-sized device that hosts decentralized web apps with local maps, supply locations, and more. These apps are pushed to Lanterns via long-range radio and Wi-Fi, and then saved offline to browsers for continued use. Lanterns can be distributed by emergency responders and are accessed by citizens through a special-purpose Wi-Fi network supported by the Lanterns. Project by Paper & Equator in New York, NY in collaboration with the Shared Reality Lab at McGill University; learn more.

Hardware components for HERMES | courtesy of Rhizomatica

[2] HERMES | Second Place ($40,000) HERMES (High-frequency Emergency and Rural Multimedia Exchange System) is autonomous network infrastructure. It enables local calling, SMS, and basic OTT messaging, all via equipment that can fit inside two suitcases, using GSM, Software Defined Radio and High-Frequency radio technologies. Project by Rhizomatica.

[3] Emergency LTE | Third Place ($30,000) Emergency LTE is an open-source, solar- and battery-powered cellular base station that functions like an autonomous LTE network. The under-50-pound unit features a local web server with apps that allow emergency broadcasts, maps, messaging, and more. Project lead: Dr. Spencer Sevilla in Seattle, WA.

[4] The Next-Generation, Disaster Relief Mobile Phone Mesh Network | Honorable Mention ($10,000) This project provides a phone-to-phone mesh network that’s always on, even if all other systems are offline. A goTenna Mesh device unlocks connectivity using ISM radio bands, then pairs with Android and iOS phones to provide messaging and mapping, as well as back-haul connectivity when available. Project by goTenna in Brooklyn, NY; see the network map here & learn more.

[5] G.W.N. | Honorable Mention ($10,000) G.W.N. (Gridless Wireless Network) leverages ISM radio bands, Wi-Fi modules, and antennae to provide connectivity. When users connect to these durable 10-pound nodes, they can locate nearby shelters or alert emergency responders. Project lead: Dr. Alan Mickelson in Boulder, CO; learn more.

[6] Wind: Off­-Grid Services for Everyday People | Honorable Mention ($10,000) Wind uses Bluetooth, Wi-Fi Direct, and physical infrastructure nodes built from common routers to create a peer-to-peer network. The project also features a decentralized software and content distribution system. By Guardian Project in New York; learn more.

[7] Baculus | Honorable Mention ($10,000) Baculus features a telescoping antennae/flag, a Wi-Fi access point, small computer, GPS transceiver, software defined radio, and battery, all housed inside a rolling backpack. The project provides applications like maps and message boards over an ad-hoc, self-repairing Wi-Fi network. Project Lead: Jonathan Dahan in New York; Design Lead: Ariel Cotton; learn more.

[8] Portable Cell Initiative | Honorable Mention ($10,000) This project deploys a “microcell,” or temporary cell tower, in the aftermath of a disaster. The project uses software defined radio (SDR) and a satellite modem to enable voice calls, SMS, and data services. It also networks with nearby microcells. Project lead: Arpad Kovesdy in Los Angeles, CA; learn more.

[9] Othernet Relief Ecosystem | Honorable Mention ($10,000) Othernet Relief Ecosystem (O.R.E.) is an extension of Dhruv’s Othernet installations in Brooklyn, NY. These installations stem from a long tradition of mesh networking wherein the OpenWRT firmware alongside the B.A.T.M.A.N. protocol runs on Ubiquiti hardware to form large-scale local area networks. The islands of connectivity can be connected to each other using point-to-point antennas. A toolset of lightweight applications can live on these networks. Project lead: Dhruv Mehrotra in New York, NY; learn more.

[10] RAVE | Honorable Mention ($10,000) RAVE (Radio-Aware Voice Engine) is a push-to-talk mobile application providing high-fidelity audio communication via a peer-to-peer Bluetooth or Wi-Fi connection. Multiple RAVE devices form a multi-hop network capable of extending communication over longer distances. RAVE’s range can be extended via a network of relay nodes. These inexpensive, battery-powered devices automatically set up a mesh network that extends real-time voice and internet access throughout a whole community, and text communication over several miles. Project by Throneless in Washington, D.C.; learn more.

 

SMART COMMUNITY NETWORKS CHALLENGE WINNERS

Many communities across the U.S. lack reliable internet access. Sometimes commercial providers don’t supply affordable access; sometimes a particular community is too isolated; sometimes the speed and quality of access is too slow. These 10 creative ideas being recognized with design prizes aim to leverage existing infrastructure — physical or network — to provide high-quality wireless connectivity to communities in need.

An EII installation | courtesy of the Detroit Community Technology Project

[11] Equitable Internet Initiative (EII) | First Place ($60,000) EII uses a system of relays to beam wireless broadband from a local ISP to vulnerable neighborhoods. The system includes solar-powered batteries, an intranet with apps, and training so local users can build and maintain the network. By the Detroit Community Technology Project, sponsored by Allied Media Projects in Detroit, MI; learn more.

 

[12] NoogaNet | Second Place ($40,000) NoogaNet provides wireless access within a defined neighborhood by leveraging utility pole-mounted Wi-Fi nodes, point-to-multipoint millimeter wave, and mesh technologies. The project also includes user training for installing, utilizing, and managing a wireless mesh node. Project by the Enterprise Center in Chattanooga, TN; learn more.

[13] Southern Connected Communities Network | Third Place ($30,000) This project entails a broadband tower — and eventually, a series of towers — that can deliver 1-Gbps speeds wirelessly to anyone in a 25-mile radius via public spectrum. The towers will be controlled by community members in rural Appalachia and the South who are currently underserved by major ISPs. Project by the Highlander Research and Education Center in New Market, TN.

[14] Solar Mesh | Honorable Mention ($10,000) This project integrates mesh Wi-Fi access points into solar-powered light poles in order to provide connectivity to low-income households. The bandwidth is provided by T-Mobile. Project by the San Antonio Housing Authority in TX.

[15] Connect the Unconnected | Honorable Mention ($10,000) Using a fixed wireless backbone network, this project provides public housing and homeless shelter residents in a two-square-mile radius with connectivity at speeds up to 35 Mb/s using point-to-point and point-to-multipoint millimeter wave technology. Residents also receive digital literacy training on refurbished devices that they are permitted to keep upon graduation. Project by DigitalC in Cleveland, OH.

[16] Repairable Community Cellular Networks | Honorable Mention ($10,000) This project equips residents with sensors and software to carry out basic repairs and precautionary measures on OpenCellular base stations. The goal: decrease the likelihood and duration of service interruptions. Project by University of Washington in Seattle; learn more.

[17] People’s Open Network | Honorable Mention ($10,000) The People’s Open Network uses off-the-shelf multi-band Wi-Fi hardware and custom open-source software to connect and automatically route internet traffic from apartment to apartment and house to house in a decentralized manner. Project by sudomesh in Oakland, CA; learn more.

[18] BarelasGig | Honorable Mention ($10,000) This project uses modern millimeter wave (mmW) technology to provide wireless gigabit backhaul and last-mile connectivity at a fraction of the cost of full fiber deployment. Project lead: Michael Sanchez in Albuquerque, NM.

[19] NYC Mesh Community Network | Honorable Mention ($10,000) This project uses high-bandwidth sector antennas, internet exchange points, mesh protocols, and solar batteries to create a community-owned, decentralized network. Project by NYC Mesh in New York City, NY; learn more.

[20] Telehub 2.0 - DuBois MAN | Honorable Mention ($10,000) This project provides wireless connectivity to underserved neighborhoods and school districts through radio infrastructure mounted on light poles. The project also features educational-technology initiatives to improve academic performance. Project by W.E.B. DuBois Learning Center in Kansas City, MO; learn more.

The post 20 Big Ideas to Connect the Unconnected appeared first on The Mozilla Blog.

Daniel Pocock: Hacking at EPFL Toastmasters, Lausanne, tonight

As mentioned in my earlier blog, I'm giving a talk about Hacking at the Toastmasters club at EPFL tonight. Please feel free to join us, and remember to turn off your mobile device or leave it at home; you never know when it might ring or become part of a demonstration.

Andy McKay: Bugzilla Triage Helper

In the process of shipping Firefox 57, we found a couple of things out about bugs and Bugzilla.

There are an awful lot of bugs filed against Firefox and all its components in the course of a release. Keeping on top of that is hard, and some teams have adopted policies to help with that (for example, see design-decision-needed).

Having a consistent approach to bugs across the organisation makes it a little easier for everyone to get a feel for what's going on.

Sometimes the burden of setting all the right values can be tiring and prone to error. For example: setting a bug to P1 could involve the following:

  • changing the priority
  • setting the right flag for Firefox version
  • changing the status
  • turning off the cc flag
  • setting a whiteboard value

In Austin, we had a chat about this and it was suggested we make a tool to provide a consistent approach to this. A few weeks later I threw Bugzilla Triage Helper onto Github and addons.mozilla.org. This tool is a content script that inserts an overlay onto Bugzilla.

This gives you a simple button (or keyboard shortcut) that does all of the above. It will even submit the bug for you so you can get on to the next bug.

Of course, it turns out that everyone's workflow is slightly different. So I recently added the ability to add "additional" actions. These are dynamically looked up by product and component. They are specified in JS in the repo, so triage owners who want a slightly different wrinkle on the flow can alter that on GitHub.

Some examples:

  • one team sets a blocking bug number on each bug that blocks release, so they've extended P1 to do that.
  • one team has 5 or 6 common replies asking for more information and informing the user on how to get that information. So they've extended the "Canned" response option to select those.

The extension is a content script that alters the DOM, which means it's full of icky DOM code to manipulate the UI. I could do this through the API, but there are a couple of reasons I do it in the UI:

  • I really don't want to build a new UI to Bugzilla
  • most people triaging are looking at the bug in the UI
  • a user might want to add more to the bug that isn't captured in the tool, in an ad hoc process
  • Bugzilla has lots of things like mid-air collision detection, recursive blocking and so on that are surfaced in the UI
  • see the first point again

At this point I'm starting to use it in some triages and I hope others will too and give me some feedback or even better some patches.

This Week In Rust: This Week in Rust 222

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

News & Blog Posts

Crate of the Week

This week's crate is afl.rs, a by now pretty well-known fuzzing tool for Rust. Thanks to Philipp Hansch for the suggestion.

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

95 pull requests were merged in the last week

And my personal favourite:

New Contributors

  • Alex Crawford
  • Antoni Boucher
  • Artyom Pavlov
  • Brad Gibson
  • Jacob Hughes
  • Mazdak Farrokhzad
  • Paolo Teti
  • Pramod Bisht
  • roblabla
  • Ross Light
  • Shaun Steenkamp

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now. This week's FCPs are:

New RFCs

Upcoming Events

The community team is trying to improve outreach to meetup organisers. Please fill out their call for contact info if you are running or used to run a meetup.

If you are running a Rust event please add it to the calendar to get it mentioned here. Email the Rust Community Team for access.

Rust Jobs

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

No quote was selected for QotW.

Submit your quotes for next week!

This Week in Rust is edited by: nasa42 and llogiq.

Daniel Pocock: SwissPost putting another nail in the coffin of Swiss sovereignty

A few people have recently asked me about the SwissID, as SwissPost has just been sending spam emails out to people telling them "Link your Swiss Post user account to SwissID".

This coercive new application of technology demands users' email addresses and mobile phone numbers "for security". A web site coercing people to use text messages "for security" has quickly become a red flag for most people, and many blogs have already covered why it is only an illusion of security, putting your phone account at risk so companies can profit from another vector for snooping on you.

SwissID is not the only digital identity solution in Switzerland but as it is run by SwissPost and has a name similar to another service it is becoming very well known.

In 2010 they began offering a solution which they call SuisseID (notice the difference? They are pronounced the same way.) based on digital certificates and compliant with Swiss legislation. Public discussion focussed on the obscene cost with little comment about the privacy consequences and what this means for Switzerland as a nation.

Digital certificates often embed an email address in the certificate.

With SwissID, however, they have a web site that looks like little more than vaporware, giving no details at all about whether certificates are used. It appears they are basically promoting an app that is designed to harvest the email addresses and phone numbers of any Swiss people who install it, lulling them into that folly by using a name that looks like their original SuisseID. If it looks like phishing, if it feels like phishing and if it smells like phishing to any expert who takes a brief sniff of their FAQ, then what else is it?

The thing is, the original SuisseID runs on a standalone smartcard so it doesn't need to have your mobile phone number, have permissions to all the data in your phone and be limited to working in areas with mobile phone signal.

The emails currently being sent by SwissPost tell people they must "Please use a private e-mail address for this purpose" but they don't give any information about the privacy consequences of creating such an account or what their app will do when it has access to read all the messages and contacts in your phone.

The actions you can take that they didn't tell you about

  • You can post a registered letter to SwissPost and tell them that for privacy reasons, you are immediately retracting the email addresses and mobile phone numbers they currently hold on file and that you are exercising your right not to give an email address or mobile phone number to them in future.
  • If you do decide you want a SwissID, create a unique email address for it and only use that email address with SwissPost so that it can't be cross-referenced with other companies. This email address is also like a canary in a coal mine: if you start receiving spam on that email address then you know SwissPost/SwissID may have been hacked or the data has been leaked or sold.
  • Don't install their app and if you did, remove it and you may want to change your mobile phone number.

Oddly enough, none of these privacy-protecting ideas were suggested in the email from SwissPost. Whose side are they on?

Why should people be concerned?

SwissPost, like every postal agency, has seen traditional revenues drop and so they seek to generate more revenue from direct marketing and they are constantly looking for ways to extract and profit from data about the public. They are also a huge company with many employees: when dealing with vast amounts of data in any computer system, it only takes one employee to compromise everything: just think of how Edward Snowden was able to act alone to extract many of the NSA's most valuable secrets.

SwissPost is going to great lengths to get accurate data on every citizen and resident in Switzerland, including deploying an app to get your mobile phone number and demanding an email address when you use their web site. That also allows them to cross-reference with your IP addresses.

  • Any person or organization who has your email address or mobile number may find it easier to get your home address.
  • Any person or organization who has your home address may be able to get your email address or mobile phone number.
  • When you call a company from your mobile phone and their system recognizes your phone number, it becomes easier for them to match it to your home address.
  • If SwissPost and the SBB successfully convince a lot of people to use a SwissID, some other large web sites may refuse to allow access without getting you to link them to your SwissID and all the data behind it too. Think of how many websites already try to coerce you to give them your mobile phone number and birthday to "secure" your account, but worse.

The Google factor

The creepiest thing is that over seventy percent of people are apparently using Gmail addresses in Switzerland and these will be a dependency of their registration for SwissID.

Given that SwissID is being promoted as a solution compliant with ZertES legislation that can act as an interface between citizens and the state, the intersection with such a powerful foreign actor as Gmail is extraordinary. For example, if people are registering to vote in Switzerland's renowned referendums and their communication is under the surveillance of a foreign power like the US, that is a mockery of democracy and it makes the allegations of Russian election hacking look like child's play.

Switzerland's referendums, decentralized system of Government, part-time army and privacy regime are all features that maintain a balance between citizen and state: by centralizing power in the hands of SwissID and foreign IT companies, doesn't it appear that the very name SwissID is a mockery of the Swiss identity?

Yellow in motion

No canaries were harmed in the production of this blog.

Don Marti: The tracker will always get through?

(I work for Mozilla. None of this is secret. None of this is Mozilla policy. Not speaking for Mozilla here.)

A big objection to tracking protection is the idea that the tracker will always get through. Some people suggest that as browsers give users more ability to control how their personal information gets leaked across sites, things won't get better for users, because third-party tracking will just keep up. On this view, today's easy-to-block third-party cookies will be replaced by techniques such as passive fingerprinting where it's hard to tell if the browser is succeeding at protecting the user or not, and users will be stuck in the same place they are now, or worse.

I doubt this is the case because we're playing a more complex game than just trackers vs. users. The game has at least five sides, and some of the fastest-moving players with the best understanding of the game are the adfraud hackers. Right now adfraud is losing in some areas where they had been winning, and the resulting shift in adfraud is likely to shift the risks and rewards of tracking techniques.

Data center adfraud

Fraudbots, running in data centers, visit legit sites (with third-party ads and trackers) to pick up a realistic set of third-party cookies to make them look like high-value users. Then the bots visit dedicated fraudulent "cash out" sites (whose operators have the same third-party ads and trackers) to generate valuable ad impressions for those sites. If you wonder why so many sites made a big deal out of "pivot to video" but can't remember watching a video ad, this is why. Fraudbots are patient enough to get profiled as, say, a car buyer, and watch those big-money ads. And the money is good enough to motivate fraud hackers to make good bots, usually based on real browser code. When a fraudbot network gets caught and blocked from high-value ads, it gets recycled for lower and lower value forms of advertising. By the time you see traffic for sale on fraud boards, those bots are probably only getting past just enough third-party anti-fraud services to be worth running.

This version of adfraud has minimal impact on real users. Real users don't go to fraud sites, and fraudbots do their thing in data centers (doesn't everyone do their Christmas shopping while chilling out in the cold aisle at an Amazon AWS data center? Seems legit to me) and don't touch users' systems. The companies that pay for it are legit publishers, who not only have to serve pages to fraudbots—remember, a bot needs to visit enough legit sites to look like a real user—but also end up competing with adfraud for ad revenue. Adfraud has only really been a problem for legit publishers. The adtech business is fine with it, since they make more money from fraud than the fraud hackers do, and the advertisers are fine with it because fraud is priced in, so they pay the fraud-adjusted price even for real impressions.

What's new for adfraud

So what's changing? More fraudbots in data centers are getting caught, just because the adtech firms have mostly been shamed into filtering out the embarrassingly obvious traffic from IP addresses that everyone can tell probably don't have a human user on them. So where is fraud going now? More fraud is likely to move to a place where a bot can look more realistic but probably not stay up as long—your computer or mobile device. Expect adfraud concealed within web pages, as a payload for malware, and of course in lots and lots of cheesy native mobile apps. (The Google Play Store has an ongoing problem with adfraud, which is content marketing gold for Check Point Software, if you like "shitty app did WHAT?" stories.) Adfraud makes way more money than cryptocurrency mining, using less CPU and battery.

So the bad news is that you're going to have to reformat your uncle's computer a lot this year, because more client-side fraud is coming. Data center IPs don't get by the ad networks as well as they once did, so adfraud is getting personal. The good news is, hey, you know all that big, scary passive fingerprinting that's supposed to become the harder-to-beat replacement for the third-party cookie? Client-side fraud has to beat it in order to get paid, so they'll beat it. As a bonus, client-side bots are way better at attribution fraud (where a fraudulent ad gets credit for a real sale) than data center bots.

Users don't have to get protected from every possible tracking technique in order to shift the web advertising game from a hacking contest to a reputation contest. It often helps simply to shift the advertiser's ROI from negative-externality advertising below the ROI of positive-externality advertising.
Advertisers have two possible responses to adfraud: either try to out-hack it, or join the "flight to quality" and cut back on trying to follow big-money users to low-reputation sites in the first place. Hard-to-detect client-side bots, by making creepy fingerprinting techniques less trustworthy, tend to increase the uncertainty of the hacking option and make flight to quality relatively more attractive.

Chris Cooper: Experiments in productivity: the shared bug queue

Maybe you have this problem too

You manage or are part of a team that is responsible for a certain functional area of code. Everyone on the team is at a different point in their career. Some people have only been there a few years, or maybe even only a few months, but they’re hungry and eager to learn. Other team members have been around forever, and due to that longevity, they are go-to resources for the rest of your organization when someone needs help in that functional area. More-senior people get buried under a mountain of review requests, while those less-senior engineers who are eager to help and grow their reputation get table scraps.

This is the situation I walked into with the Developer Workflow team.

This was the first time that Mozilla had organized a majority (4) of build module peers in one group. There are still isolated build peers in other groups, but we’ll get to that in a bit.

With apologies to Ted, he’s the elder statesman of the group, having once been the build module owner himself before handing that responsibility off to Greg (gps), the current module owner. Ted has been around Mozilla for so long that he is a go-to resource for not only build system work but many other projects he’s been involved with, e.g. crash analysis. In his position as module owner, Greg bears the brunt of the current review workload for the build system. He needs to weigh in on architectural decisions, but also receives a substantial number of drive-by requests simply because he is the module owner.

Chris Manchester and Mike Shal by contrast are relatively new build peers and would frequently end up reviewing patches for each other, but not a lot else. How could we more equitably share the review load between the team without creating more work for those engineers who were already oversubscribed?

Enter the shared bug queue

When I first came up with this idea, I thought that certainly this must have been tried at some point in the history of Mozilla. I was hoping to plug into an existing model in Bugzilla, but alas, such a thing did not already exist. It took a few months of back-and-forth with our resident Bugmaster at Mozilla, Emma, to get something set up, but by early October, we had a shared queue in place.

How does it work?


We created a fictitious meta-user, core-build-config-reviews@mozilla.bugs. Now whenever someone submits a patch to the Core::Build Config module in bugzilla, the suggested reviewer always defaults to that shared user. Everyone on the teams watches that user and pulls reviews from “their” queue.

That’s it. No, really.

Well, okay, there’s a little bit more process around it than that. One of the dangers of a shared queue is that since no specific person is being nagged for pending reviews, the queue could become a place where patches go to die. As with any defect tracking system, regular triage is critically important.

Is it working?

In short: yes, very much so.

Subjectively, it feels great. We’ve solved some tricky people problems with a pretty straightforward technical/process solution and that’s amazing. From talking to all the build peers, they feel a new collective sense of ownership of the build module and the code passing through it. The more-senior people feel they have more time to concentrate on higher level issues or deeper reviews. The less-senior people are building their reputations, both among the build peers and outside the group to review requesters.

Numerically speaking, the absolute number of review requests for the Core::Build Config module has remained consistent since the adoption of the shared queue. The distribution of actual reviewers has changed a lot though. Greg and Ted still end up reviewing their share of escalated requests — it’s still possible to assign reviews to specific people in this system — but Mike Shal and Chris have increased their review volume substantially. What’s even more awesome is that the build peers who are *NOT* in the Developer Workflow team are also fully onboard, regularly pulling reviews off the shared queue. Kudos to Nick Alexander, Nathan Froyd, Ralph Giles, and Mike Hommey for also embracing this new system wholeheartedly.

The need for regular triage has also provided another area of growth for the less-senior build peers. Mike Shal and Chris Manchester have done a great job of keeping that queue empty and forcing the team to triage any backlog each week in our team meeting.

Teh Future

When we were about to set this up in October, I almost pulled the plug.

Over the next six months, Mozilla is planning to switch code review tools from MozReview/Splinter to Phabricator. Phabricator has more modern built-in tools like Herald that would have made setting up this shared queue a little easier, and that’s why I paused… briefly.

Phabricator will undoubtedly enable a host of quality-of-life improvements for developers when it is deployed, but I’m glad we didn’t wait for the new system. Mozilla engineers are already getting accustomed to the new workflow and we’re reaping the benefits *right now*.

David Humphrey: Edge Cases

Yesterday I was looking at a bug with some students. It related to a problem in a file search feature: a user found that having a folder-name surrounded with braces (e.g., {name}) meant that no search results were ever reported within the folder. "This isn't a bug," one of them told me, "because no one would ever do this." I found this fascinating on a number of levels, not least because someone had in fact done it, and even gone so far as to file the bug.

I love edge cases. I'm pretty sure it started during the years I worked on Processing.js--anyone who has worked in projects with comprehensive test suites probably knows what I mean. With that project we had a massive set of existing Java-based Processing code that we needed to make work on the web. Anything that was possible in the Java implementation needed to work 1:1 in our JavaScript version. It was amazing the spectacular variety and extremes we'd see in how people wrote their code. Every time we'd ship something, a user would show up and tell us about some edge case we'd never seen before. We'd fix it, add tests, and repeat the cycle.

Doing this work over many years, I came to understand that growth happens at a project's edge rather than in the middle: the code we wanted to work on (some pet optimization or refactor) was rarely what we needed to be doing. Rather, the most productive space we could occupy was at the unexplored edges of our territory. And to do this, to really know what was "out there," we needed help; there was too much we couldn't see, and we needed people to stumble across the uncultivated border and tell us what needed attention.

The thing about an edge case bug is that its seemingly trivial nature often points to something more fundamental: a total lack of attention to something deep in your system. The assumptions underpinning your entire approach, the ones you don't even know that you've made, suddenly snap into sharp focus and you're forced to face them for the first time. Even well designed programs can't avoid this: whatever you optimize for, whatever cases you design and test against, you necessarily have to leave things out in order to ship something. At which point reports of edge case bugs become an incredible tool in the fight to open your territory beyond its current borders.

It's easy to think that what I'm describing only applies to small or inexperienced teams. "Surely if you just engineered things properly from the start, you could avoid this mess." Yet big, well-funded, engineering powerhouses struggle just the same. Take Apple, for example. This week they've had to deal with yet another show-stopper bug in iOS--if you paste a few characters of text into your phone it crashes. There's a great write-up about the bug by @manishearth:

Basically, if you put this string in any system text box (and other places), it crashes that process...The original sequence is U+0C1C U+0C4D U+0C1E U+200C U+0C3E, which is a sequence of Telugu characters: the consonant ja (జ), a virama ( ్ ), the consonant nya (ఞ), a zero-width non-joiner, and the vowel aa ( ా)...then I saw that there was a sequence in Bengali that also crashed. The sequence is U+09B8 U+09CD U+09B0 U+200C U+09C1, which is the consonant “so” (স), a virama ( ্ ), the consonant “ro” (র), a ZWNJ, and vowel u ( ু).

Want to know which languages Apple's iOS engineers aren't using on a daily basis or in their tests? It's easy to point a knowing finger and say that they should know better, but I wonder how well tested your code is in similar situations? You can't believe how many bugs I find with students in my class, whose usernames and filepaths contain Chinese or other non-English characters, and software which blows up because it was only ever tested on the ASCII character set.
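The gap is easy to demonstrate. Here is a tiny, hypothetical example (in Rust, purely because its strings are UTF-8 by construction): the number of bytes and the number of code points in the Telugu sequence quoted above are different, and neither matches what a user would call a "character." Code that quietly assumes those things are the same, which ASCII-only testing happily lets you do, will eventually break on someone's real name or file path.

    fn main() {
        // The Telugu sequence from the iOS bug, built from its code points.
        let s = "\u{0C1C}\u{0C4D}\u{0C1E}\u{200C}\u{0C3E}";
        // Five code points, fifteen bytes in UTF-8 -- and a text shaper may
        // render them as a single visible cluster.
        println!("code points: {}", s.chars().count()); // 5
        println!("bytes:       {}", s.len());           // 15
    }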

The world is big place, and we should all be humbled by its diversity of people and places. The software we build to model and interact with it will always be insufficient to the needs of real people. "Everyone does this..." and "No one would ever do that..." are naive statements you only make until you've seen just how terrible all code is at the edges. After you've been burned a few times, after you've been humbled and learned to embrace your own finite understanding of the world, you come to love reports of edge cases, and the opportunity they provide to grow, learn, and connect with someone new.

Gervase Markham: Going Home

I’m going home.

As some of my readers will know, my cancer (read that link if the fact I have cancer is new to you) has been causing difficulty in my liver this year, and recently we had a meeting with my consultant to plot a way forward. He said that recent scans had shown an increased growth rate of some tumours (including the liver one), and that has overwhelmed my body’s ability to cope with the changes cancer brings. The last two months have seen various new symptoms and a reasonably rapid decline in my general health. The next two months will be more of the same unless something is done.

After some unsuccessful procedures on my liver over the course of this year, the last option is radiotherapy to try and shrink the problem tumour; we are investigating that this week. But even if that succeeds, the improvement will be relatively short-lived – perhaps 3-6 months – as the regrowth rate will be faster. If radiotherapy is not feasible or doesn’t work, the timelines are rather shorter than that. My death is not imminent, but either way I am unlikely to see out 2018. In all this, my wife, my children and I are confident that God is in charge and his purposes are good, and we can trust him and not be afraid of what is coming. We don’t know what the future holds for each of us, but he does.

We’ve taken this news as a sign to make some significant changes. The most relevant to readers of this blog is that I am stepping away from Mozilla so I can spend more time focussed on the most important things – my relationships with Jesus, and with my family. I love my work, and God has blessed my time at Mozilla and enabled me to do much which I think has been good for the Internet and the world. However, there are things in life which are much more important, and it’s now time for others to take up those projects and causes and carry them into the future. I have every confidence in my colleagues and fellow Mozillians that this will be done with the greatest care and skill. The CA program, MOSS, and Mozilla’s policy work are all in very good hands.

If you pray, please pray that we would make wise decisions about what to do when, and that we would live through this process in a way that brings glory to Jesus.

In case it’s of interest, we have set up a read-only mailing list which people can join to keep informed about what is going on, and to hear a bit about how we are handling this and what we would like prayer for. You can subscribe to the list using that link, if you have a Google account. If you don’t you can still join by emailing lightandmomentary+subscribe@googlegroups.com.

“Though outwardly we are wasting away, yet inwardly we are being renewed day by day. For our light and momentary troubles are achieving for us an eternal glory that far outweighs them all. So we fix our eyes not on what is seen, but on what is unseen, since what is seen is temporary, but what is unseen is eternal.” — 2 Cor 4:16-18.

If I have done anything good in 18 years at Mozilla, may God get all the glory.

Daniel StenbergWhy is your email in my car?

I got this email in German…

Subject: Warumfrage

Schönen guten Tag

Ich habe auf meinem Navi aus meinem Auto einen Riesentext wo unter anderem Ihre Mailadresse zu sehen ist?

Können Sie mich schlau machen was es damit auf sich hat ?

… which translated into English says:

I have a huge text on my sat nav in my car where, among other things, your email address can be seen?

Can you tell me what this is all about?

I replied (in English) to the question, explained how I’m the main author of curl and how his navigation system is probably using this. I then asked what product or car he saw my email in and if he could send me a picture of my email being there.

He responded quickly, and his very brief answer only said it is a Toyota Verso from 2015. So I put an image of such a car at the top of this post.

K Lars LohnLars and the Real Internet of Things, Part 3

In Part 2 of this series, I demonstrated loading and running the Things Gateway by Mozilla software onto a Raspberry Pi.  I showed how to use a DIGI XStick, which implements the Zigbee protocol, to run a light bulb and smart plug.

Today, I'm going to take that system and make it work with the Z-Wave protocol, too.  Then I'll demonstrate how to use a rule to make a Z-Wave switch follow the same state as a Zigbee switch.

Goal: add an Aeotec Z-Stick to the existing project, pair it with some Z-Wave devices, and, finally, demonstrate a rule that crosses over the barrier between the two protocols.  This addition to the project will maintain total local control, so there should be no communication out to the Internet.







Requirements & Parts List:

  • The Raspberry Pi and associated hardware from Part 2 of this series. What's it for: the base platform that we'll be adding onto. Where I got it: Part 2 of this series.
  • Aeotec Z-Stick. What's it for: an adapter that enables the Things Gateway to talk Z-Wave. Where I got it: Amazon.
  • Aeotec Smart Switch 6 ZW096-A02. What's it for: a Z-Wave switch to control. Where I got it: Amazon.
  • GE ZW1403 Smart Switch. What's it for: a Z-Wave switch to control (this device has been temperamental and may not be the best choice). Where I got it: Amazon.


This part of the project is going to be ridiculously easy. We're starting with the system as it stood at the end of the last posting: a Things Gateway on a Raspberry Pi that only understands Zigbee devices.

To start, I unplugged the Raspberry Pi from its power supply.  I plugged the Aeotec Z-Stick into a USB port on the Raspberry Pi, recognizing that the existing DIGI XStick had to be repositioned for both to fit.  I then reapplied power to the Raspberry Pi and it booted.  I assembled all the lights for this experiment.

From left to right: a grocery store LED bulb on the Z-Wave Aeotec Smart Switch 6, to be called AEO 007; an LED bulb disguised as an old-fashioned filament bulb on the Zigbee Sylvania Smart Plug, to be called SYLV 002; a Zigbee CREE dimmable bulb, to be called CREE 006; a bulk Costco compact fluorescent from 2013 on a bulky and sideways Z-Wave GE Smart Switch, to be called GE 001.

The Aeotec Z-Stick is a feature-packed USB device that implements the Z-Wave+ protocol.  My first piece of advice, though, is to not read the instructions.  You see, the Things Gateway support for Z-Wave is still in its infancy and doesn't particularly fit well with the pairing methods traditionally used by Z-Wave devices.

The Z-Stick instructions will tell you to pair Z-Wave devices in a manner that the Things Gateway does not yet understand.  I suggest that you hold off and pair Z-Wave devices only through the Things Gateway software.  This will save you a lot of pain.

I went back to my browser and entered "gateway.local" in the URL bar.  I had to log in using the credentials that I provided during the previous setup.  That led me to this screen:
I pressed the "+" button to start pairing the two new Z-Wave devices.  The GE device behaves differently from the Aeotec device.  Normally, when pairing Z-Wave things, one must press a button on the device to be added to the system.  However, the GE device comes from the factory ready to go without having this button pressed.  It shows up on the screen, but the Aeotec device does not.
To keep track of the devices, I renamed this first Z-Wave device "GE 001" and pressed the "Save" button.
Now I had to get the Aeotec device added.  I found out the hard way that the little spinner next to "Scanning for new devices..." must still be spinning for pairing to work. In an earlier trial of this demo, I had to press "Done" and then immediately press the "+" button to return to this screen.

The Aeotec Smart Plug uses a more traditional Z-Wave approach to pairing.  I pressed the circular indent on the corner of the plug itself.  Since I had a light plugged into the Smart Plug, it illuminated.  After a few moments, the Smart Plug appeared on the page.  If it doesn't appear for you, wait for the colored lights on the edge of the plug to calm down, and press the button again.

I renamed the plug to a more understandable name before pressing "Save" and "Done".
This gives me four devices connected to my Things Gateway: two that speak Zigbee and two that speak Z-Wave.  I played around with the controls, turning each on and off in turn.  I also discovered that support for the Sylvania Smart Plug was not as complete as I thought, as it was not registering anything but on or off.
Now the fun begins: I can cross the boundary between Z-Wave and Zigbee and make these devices work together using rules.

Let's make the compact fluorescent in the Z-Wave GE 001 plug follow whatever state the old-fashioned bulb (actually an LED) in the Zigbee SYLV 002 plug is in.

I clicked the Menu button and selected Rules.
I dragged the SYLV 002 device up into the "device as input" section and then used the drop down box to select the "On" property.
 
Then I dragged the GE 001 device up to the "device as output" section and selected the "On" property.  Then I went to the top of the box and named the rule:
Now I used the back arrow in the upper left to return all the way back to the Things screen.  I turned on the SYLV 002 light and immediately the GE 001 light came on, too.  Apparently, the rule system automatically implements the converse rule: if SYLV 002 is turned off, then GE 001 is turned off too.
This rule, however, works only from the perspective of the SYLV 002 light. The GE 001 light can still be operated independently, so the rule doesn't cover all the possibilities. Making another rule with the GE 001 light as the leader creates a loop that can have some undesirable behavior. We can exploit this for some silly fun.  I made a set of four rules:
  • AEO 007 on  --->  SYLV 002 on
  • SYLV 002 on --->  CREE 006 on
  • CREE 006 on --->  GE 001 on
  • GE 001 on   --->  AEO 007 off
I was expecting sequencing behavior, but what I got was something vaguely linear with some chaos added for good measure.  It was hard to get it to stop.  I eventually unplugged the power to the Raspberry Pi.  On rebooting, I deleted the rules; I didn't think I had a practical application for such chaos.
What does this mean for rules?  Well, it means that it is possible to create infinitely looping dependencies.  Think about how one would make two lights behave mutually exclusively: when one light is on, the other is off.  An initial rule would be like the first example above, where one follows another, except that the target bulb is toggled to the opposite of the leader.  That works fine from the perspective of the leader.  However, if the follower bulb is controlled independently, the leader's rule doesn't apply and both lights could be on at the same time.  It is tempting to add another rule where the follower becomes the leader and vice versa.  However, that sets up an infinite loop.  Try it and you'll see what I mean.
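
To see why that last arrangement can never settle, here is a minimal sketch (written in Rust purely to illustrate the feedback loop; the real gateway evaluates its rules internally) that simulates the four rules above, extended with the automatic converse behavior observed earlier. Applying the converse to the final "off" rule is my assumption, but it matches how the system behaved:

use std::collections::VecDeque;

fn main() {
    // Device indices: 0 = AEO 007, 1 = SYLV 002, 2 = CREE 006, 3 = GE 001
    let names = ["AEO 007", "SYLV 002", "CREE 006", "GE 001"];
    let mut on = [false; 4];

    // Rules as (leader, follower, follower state when the leader turns on).
    // The converse (leader off -> follower gets the opposite state) is applied
    // automatically, mirroring the behavior observed with the first rule.
    let rules = [(0, 1, true), (1, 2, true), (2, 3, true), (3, 0, false)];

    // Seed the system by turning AEO 007 on, then watch the events cascade.
    let mut queue = VecDeque::new();
    queue.push_back((0usize, true));

    let mut steps = 0;
    while let Some((device, state)) = queue.pop_front() {
        if on[device] == state {
            continue; // no state change, nothing to propagate
        }
        on[device] = state;
        println!("{} -> {}", names[device], if state { "on" } else { "off" });
        for &(leader, follower, follower_on) in rules.iter() {
            if leader == device {
                let target = if state { follower_on } else { !follower_on };
                queue.push_back((follower, target));
            }
        }
        steps += 1;
        if steps >= 16 {
            println!("...still going; the chain never settles");
            break;
        }
    }
}

Running it prints the four lights turning on in sequence, then off in sequence, then on again, forever: exactly the hard-to-stop chaos described above.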

In my next blog posting, I'm going to add some color to this circus with some Philips Hue lights.

Mozilla Marketing Engineering & Ops BlogMozMEAO SRE Status Report - February 16, 2018

Here’s what happened on the MozMEAO SRE team from January 23 - February 16.

Current work

SRE general

Load Balancers
Cloudflare to Datadog service
  • The Cloudflare to Datadog service has been converted to use a non-helm based install, and is running in our new Oregon-B cluster.
Oregon-A cluster
  • We have a new Kubernetes cluster running in the us-west-2 AWS region that will run support.mozilla.org (SUMO) services as well as many of our other services.

Bedrock

  • Bedrock is moving to a “sqlitened” version in our Oregon-B Kubernetes cluster that removes the dependency on an external database.

MDN

  • The cronjob that performs backups on attachments and other static media broke due to a misconfigured LANG environment variable. The base image for the cronjob was updated and deployed. We’ve also added some cron troubleshooting documentation as part of the same pull request.

  • Safwan Rahman submitted an excellent PR to optimize Kuma document views 🎉🎉🎉.

support.mozilla.org (SUMO)

  • SUMO now uses AWS Simple Email Service (SES) to send email.
  • We’re working on establishing a secure link between SCL3 and AWS for MySQL replication, which will help us significantly reduce the amount of time needed in our migration window.
  • SUMO is now using a CDN to host static media.
  • We’re working on Python-based Kubernetes automation for SUMO based on the Invoke library. Automation includes web, cron and celery deployments, as well as rollout and rollback functionality.
  • Using the Python automation above, SUMO now runs in “vanilla Kubernetes” without Deis Workflow.

Links

Firefox Test PilotImproving the web with small, composable tools

Firefox Screenshots is the first Test Pilot experiment to graduate into Firefox, and it’s been surprisingly successful. You won’t see many people talking about it: it does what you expect, and it doesn’t cover new ground. Mozilla should do more of this.

Small, Composable Tools

One of the inspirations for Firefox Screenshots was user research done in 2015. This research involved interviews with a few dozen people about how they save, share, and recall information. I myself had a chance to be part of several house visits in Rochester, NY. We looked over people’s shoulders while they showed us how they worked.

My biggest takeaways from that research:

  • There is a wide variety of how people manage their information, with many combinations of different tools and complex workflows
  • Everyone is pretty happy with what they are doing
  • People only want small, incremental changes
  • Screenshots are pretty popular

It was surprising to see how complicated and sometimes clearly suboptimal people’s workflows were, while also understanding that each person was happy with what they did. They were happy because they weren’t looking for something new. At any moment most people are settled (satisficed) on a process, and they have better things to do than constantly reconsider those choices.

After learning how they worked, we’d sometimes offer up alternatives and get reactions. The alternatives received lots of crickets. If you could add a tool to existing workflows then there might be interest, but there wasn’t interest in replacing tools unless perhaps it was a one-to-one match. People specifically weren’t interested in integrated tools, ones that improved the entire workflow.

And who among us hasn’t been burned by overenthusiasm for a fully integrated tool? It seems great, then it gets tiring just to keep track, annoying to try to get people to sign up so you can collaborate, some number of things don’t fit into the process, you’ve lost track of your old things, it just feels like work.

Old Philosophies

The Unix philosophy:

  • Write programs that do one thing and do it well.
  • Write programs to work together.
  • Write programs to handle text streams, because that is a universal interface.

This is still what works well, and still what people want! This is also what the web can provide and apps and silos cannot: open composability.

This isn’t the same as APIs and integrated tools: find and grep are not integrated, and you don’t have to set up OAuth integration between tail and tee. Things work together because you use them together.

What would the Unix toolset look like on the web? Please speculate! I’ve started structuring some of my own ideas into a set of notes.

Stop Being So Clever

At the time of the user research, Donovan and I had been working on an experiment in page capture — you could think of it like a personal archive.org. We added screenshotting as an entree into what felt like a more advanced tool.

In the end nothing is left of that original concept, and we just have plain screenshots. It hurt to see that all go. Screenshots are not exciting, and they are not innovative, and there is nothing very new about them. And clearly I needed to get over myself.

And so this is a lesson in humility: things don’t have to be new or novel or exciting to be useful. Screenshots is so un-new, so un-novel, so un-exciting that we aren’t even following along with the competition. Mozilla should spend more time here: behind the curve where the big players stopped caring and the little players have a hard time getting any attention. Behind the curve is where the web was a lot more like how Mozilla wants it to be.

There are lots of useful things back here, things that technophiles have appreciated but the wider population doesn’t know how to use. A pastebin. Site archival. Deep linking. Inline linking at all! Scraping. Clipboard management. Etherpad is still the best lightweight collaborative editor. Little stuff, things that don’t try to take over, things that don’t try to leverage the user for corporate benefit. This stuff is not very hard to make, and is affordable to run. Combine that with a commitment to keep the services competently maintained and openly interoperable, and there’s a lot of value to provide. And that’s what Mozilla is in it for: to be of service.

Being Part Of The Web

Screenshots was not easy to make. It was not technically difficult, but it was not easy.

Mozilla has long been reluctant to host user content. Firefox Sync is pointedly encrypted on the client. Before Screenshots the only unencrypted user content the corporation handled was the add-ons and themes on addons.mozilla.org.

Screenshots did not have to have a server component, and it did not have to allow people to upload or share shots within the tool. I take some pride in the fact that, despite all our cultural and legal attitudes at Mozilla, screenshots.firefox.com is a thing. It required a great deal of stubbornness on my part, and at times a pointed blindness to feedback.

In a small way Screenshots makes Mozilla part of the web, not just a window onto the web. This is a direction I think we should take: *.firefox.com links of all kinds should become normal, and you should know that on the other side of the link will be respectful content, it won’t be an avenue for manipulation, and you won’t be a product. Be the change you want to see, right?

Thanks to Wil Clouser and Jared Hirsch for feedback on this post.

Originally published at www.ianbicking.org.


Improving the web with small, composable tools was originally published in Firefox Test Pilot on Medium, where people are continuing the conversation by highlighting and responding to this story.

Firefox NightlyThese Weeks in Firefox: Issue 32

Highlights

Friends of the Firefox team

(Give a shoutout/thanks to people for helping fix and test bugs. Introductions)

 

Project Updates

Add-ons

Activity Stream

Performance

Privacy/Security

Search and Navigation

Address Bar & Search
Places
More

Sync / Firefox Accounts

Web Payments

Hacks.Mozilla.OrgCreate VR on the Web using Unity3D

We are happy to announce Mozilla’s latest tool, Unity WebVR Assets. It is free to download and available now on the Unity Asset Store. This tool allows creators to publish and share VR experiences they created in Unity on the open web with a simple URL or link. These experiences can then be viewed with any WebVR-enabled browser, such as Firefox (using the Oculus Rift or HTC VIVE) and Microsoft Edge (using a Windows Mixed Reality headset).

Try it out now!

With the release of these assets, we hope to bridge the frictionless distribution, ease of use and accessibility of the Web with the best-in-class content creation tools from Unity.  We believe this is a great fit for demos, marketing, news content, and any case where traditional application flows may be too time-consuming or troublesome for users.

Since the assets utilize the standards-based WebVR API, it removes the need for any platform specific SDKs and provides the ability to be responsive to different VR configurations. This enables the creation of experiences that can scale to different requirements, including everything from basic, desktop-based, non-VR environments (for example, using first-person-shooter-style controls) to fully immersive, room-scale, and motion-controlled VR configurations (for the HTC VIVE, Oculus Rift, and Windows Mixed Reality headsets).

Using the WebVR Assets

Getting started couldn’t be easier! From within Unity, launch the Asset Store and search for WebVR to find the WebVR Assets package.

WebVR assets in action porting a Unity game to WebVR.

For full instructions on how to use these assets with your content, check out the Getting Started Guide.

We want to hear from you

We’d love to hear about what you come up with using the WebVR-Assets. Share your work with us by using the #unitywebvr Twitter hashtag.

The Unity WebVR Assets is an open-source project (licensed under Apache 2) available on GitHub:

You are encouraged to:

Reach out to us with any questions you may have or help you may need, and participate in the discussions on the WebVR Slack in the #unity channel.

Credits

This project was heavily influenced by early explorations in using Unity to build for WebVR by @gtk2k.

Also, thanks to @arturitu for creating the 3D-hand models used for controllers in these examples.

Mozilla Reps CommunityReps On-boarding Team

As you already know from our discourse topic, we have created an Onboarding Screening Team.

The scope of this team is to help the Reps Council evaluate new applications to the Reps program.

In January we opened a call for the new team members on https://discourse.mozilla.org/t/do-you-want-join-the-onboarding-screening-team/24497

We were happily surprised by the number of applications we got.

After a month, the Reps Council voted and chose the new members. They are:

On behalf of the Reps Council (and as part of this team), I want to say thank you to the previous team, which worked on 9 rounds of screening and evaluated 39 applications.

These amazing people are:

The new team will start work soon (we have about 10 applications in the queue) with the help of a Reps Council member who will focus on communication between applicants and this team’s evaluations.

If you want to congratulate your fellow Reps you can do it in this thread:

https://discourse.mozilla.org/t/welcome-to-the-new-onboarding-screening-team-2018-1/25665

More updates are coming so stay tuned!

Air MozillaReps Weekly Meeting, 15 Feb 2018

Reps Weekly Meeting This is a weekly call with some of the Reps to discuss all matters about/affecting Reps and invite Reps to share their work with everyone.

Mozilla VR BlogCreate VR on the Web using Unity 3D


We are happy to announce Mozilla's latest tool for creating VR content, Unity 3D WebVR Assets. It is free to download and available now on the Unity Asset Store. This tool allows creators to publish VR experiences created in Unity and share them on the open Web with just a URL or link. These experiences can then be viewed with any WebVR-enabled browser such as Firefox (using the Oculus Rift or HTC VIVE) and Microsoft Edge (using a Windows Mixed Reality headset).


With the release of this asset package, we hope to bridge the frictionless distribution, ease of use, and accessibility of the Web with the best-in-class content-creation tools from Unity. We believe WebVR is a great fit for demos, marketing, news content, and any case where traditional application flows may be too time-consuming or troublesome for users.

Since the assets utilize the standards-based WebVR API, it removes the need for any platform-specific SDKs and provides the ability to be responsive to different VR configurations. This enables the creation of experiences that can scale to different requirements, including everything from basic, desktop-based, non-VR environments (for example, using First-Person-Shooter-style controls) to fully immersive, room-scale, and motion-controlled VR configurations (for the HTC VIVE, Oculus Rift, and Windows Mixed Reality headsets).

Using the WebVR Assets package


Getting started couldn’t be easier! From within Unity, launch the Asset Store and search for WebVR to find the WebVR Assets package.

WebVR Assets in action. A Unity game ported to WebVR.


For full instructions on how to use these assets with your content, check out the Getting Started guide.

We want to hear from you!

We’d love to hear about what you come up with using the WebVR-Assets. Share your work with us and use the #unitywebvr Twitter hashtag.

Brought to you by Mozilla, the Unity WebVR Assets is an open-source project (licensed under Apache 2) available on GitHub.

Reach out to us with any questions you may have or help you may need, and participate in the discussions on the WebVR Slack in the #unity channel.

Credits

This project was heavily influenced by early explorations in using Unity to build for WebVR by @gtk2k.

Also, thanks to @arturitu for creating the 3D-hand models used for controllers in these examples.

The Rust Programming Language BlogAnnouncing Rust 1.24

The Rust team is happy to announce a new version of Rust, 1.24.0. Rust is a systems programming language focused on safety, speed, and concurrency.

If you have a previous version of Rust installed via rustup, getting Rust 1.24.0 is as easy as:

$ rustup update stable

If you don’t have it already, you can get rustup from the appropriate page on our website, and check out the detailed release notes for 1.24.0 on GitHub.

What’s in 1.24.0 stable

This release contains two very exciting new features: rustfmt and incremental compilation!

rustfmt

For years now, we’ve wanted a tool that can automatically reformat your Rust code to some sort of “standard style.” With this release, we’re happy to announce that a preview of rustfmt can be used with 1.24 stable. To give it a try, do this:

$ rustup component add rustfmt-preview

There are two important aspects here: first, you’re using rustup component add instead of cargo install. If you’ve previously used rustfmt via cargo install, you should uninstall it first. Second, this is a preview, as it says in the name. rustfmt is not at 1.0 yet, and some stuff is being tweaked, and bugs are being fixed. Once rustfmt hits 1.0, we’ll be releasing a rustfmt component and deprecating rustfmt-preview.
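
To get a feel for what rustfmt does, here is a small before-and-after (a hypothetical snippet run through rustfmt’s default style; the exact output may vary slightly between preview versions). Given this unformatted function:

fn add(a:i32,b:i32)->i32{a+b}

rustfmt rewrites it as:

fn add(a: i32, b: i32) -> i32 {
    a + b
}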

In the near future, we plan on writing a post about this release strategy, as it’s big enough for its own post, and is broader than just this release.

For more, please check out rustfmt on GitHub.

Incremental compilation

Back in September of 2016 (!!!), we blogged about Incremental Compilation. While that post goes into the details, the idea is basically this: when you’re working on a project, you often compile it, then change something small, then compile again. Historically, the compiler has compiled your entire project, no matter how little you’ve changed the code. The idea with incremental compilation is that you only need to compile the code you’ve actually changed, which means that that second build is faster.

As of Rust 1.24, this is now turned on by default. This means that your builds should get faster! Don’t forget about cargo check when trying to get the lowest possible build times.

This is still not the end of the story for compiler performance generally, nor for incremental compilation specifically. We have a lot more work planned for the future. For example, another change related to performance hit stable this release: codegen-units is now set to 16 by default. One small note about this change: it makes builds faster, but makes the final binary a bit slower. To eke out every last drop of performance, set codegen-units back to 1 in your Cargo.toml.
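
For reference, that tweak goes in the release profile of your Cargo.toml; the snippet below simply spells out the setting the paragraph above refers to:

[profile.release]
codegen-units = 1   # slower to compile, but lets the optimizer work across the whole crate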

More to come!

Other good stuff

There’s one other change we’d like to talk about here: undefined behavior. Rust generally strives to minimize undefined behavior, having none of it in safe code, and as little as possible in unsafe code. One area where you could invoke UB is when a panic! goes across an FFI boundary. In other words, this:

extern "C" fn panic_in_ffi() {
    panic!("Test");
}

This cannot work, as the exact mechanism of how panics work would have to be reconciled with how the "C" ABI works, in this example, or any other ABI in other examples.

In Rust 1.24, this code will now abort instead of producing undefined behavior.
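
If you do expose Rust functions over FFI, one way to keep a panic from ever reaching the foreign caller is to catch it on the Rust side with std::panic::catch_unwind and turn it into an error value. A minimal sketch (the function name and error convention here are illustrative, not part of the release):

use std::panic;

#[no_mangle]
pub extern "C" fn do_work() -> i32 {
    // Catch any panic before it can cross the FFI boundary.
    let result = panic::catch_unwind(|| {
        // ... work that might panic goes here ...
        42
    });
    match result {
        Ok(value) => value,
        Err(_) => -1, // signal failure with an error code instead of unwinding
    }
}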

See the detailed release notes for more.

Library stabilizations

If you’re a fan of str::find, which is used to find a given char inside of a &str, you’ll be happy to see this pull request: it’s now 10x faster! This is thanks to memchr. [u8]::contains uses it too, though it doesn’t get such an extreme speedup.
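
As a quick refresher on the two APIs in question (a trivial sketch, not taken from the release notes):

fn main() {
    // str::find with a char pattern returns the byte index of the first match.
    assert_eq!("Hello, Rust!".find('R'), Some(7));

    // [u8]::contains benefits from the same memchr-backed search.
    assert!(b"Hello, Rust!".contains(&b'R'));
}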

Additionally, a few new APIs were stabilized this release:

Finally, these functions may now be used inside a constant expression, for example, to initialize a static:

  • Cell, RefCell, and UnsafeCell’s new functions
  • The new functions of the various Atomic integer types
  • {integer}::min_value and max_value
  • mem’s size_of and align_of
  • ptr::null and null_mut
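
As a small illustration of what this unlocks (a minimal sketch based on the list above, not code from the release notes), a static atomic counter and a size constant can now be initialized directly on stable:

use std::mem;
use std::sync::atomic::{AtomicUsize, Ordering};

// Both initializers are constant expressions as of Rust 1.24.
static REQUEST_COUNT: AtomicUsize = AtomicUsize::new(0);
const WORD_SIZE: usize = mem::size_of::<usize>();

fn main() {
    REQUEST_COUNT.fetch_add(1, Ordering::Relaxed);
    println!("{} request(s); {}-byte words",
             REQUEST_COUNT.load(Ordering::Relaxed), WORD_SIZE);
}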

See the detailed release notes for more.

Cargo features

The big feature of this release was turning on incremental compilation by default, as mentioned above.

See the detailed release notes for more.

Contributors to 1.24.0

Many people came together to create Rust 1.24. We couldn’t have done it without all of you. Thanks!

Mike ConleyFirefox Performance Update #1

In an attempt to fill the shoes of Ehsan’s excellent Quantum Flow Newsletters [1], I’ve started to keep track of interesting performance bugs that have been tackled over the past little while.

I don’t expect I’ll be able to put together such excellent essays on performance issues in Firefox, but I can certainly try to help to raise the profile of folks helping to make Firefox faster.

Expect these to come out pretty regularly, especially as we continue to press our performance advantage over the competition. Maybe I’ll come up with a catchy title, too!

Anyhow, here’s the stuff that’s gone by recently that I’m pretty stoked about, performance-wise! To everybody in this list – thanks for making Firefox faster!


  1. Like this one! Check out Ehsan’s blog for the rest of the series. 

Firefox Test PilotWelcome Marnie to the Test Pilot Team!

Late last year, the Test Pilot team welcomed a new engineering program manager, Marnie Pasciuto-Wood. In this post, Marnie talks about what it’s been like joining Mozilla and what keeps her busy and inspired outside of work.

How would you describe your role on the Test Pilot team?

Well, right now I feel like I’m still ramping up. Sometimes I think everything makes sense, but then BLAM, something sneaks up and surprises me. That said, I’m the Engineering Program Manager for the Test Pilot team. I started in mid-November, the day before the Quantum launch, and 3 weeks before the All-Hands in Austin.

If I were to describe my role in two words, I would say “cat wrangler,” where the cat is scheduling, planning, managing workloads, and agile project management for the various projects under the Test Pilot umbrella. I’m not there yet (see above, ramping up), but I’m working my way there.

What does a typical day at Mozilla look like for you?

Meetings! On the Test Pilot team, we have folks all over the world, so the mornings and early afternoons are stacked full of meetings. Towards the latter part of the afternoon, I have down time where I can deal with any tasks I need to accomplish before the next set of meetings start on the following day.

Where were you before Mozilla?

I was at my last company for 7.5 years. Throughout my tenure there, I had various roles: UX Designer, Technical Project Manager, and finally Engineering Program Manager, which I loved the most. In that role, I ran the hosted, on-premises, and cloud software programs for my company.

On Test Pilot, what are you most looking forward to and why?

I’m most excited about getting new, valuable, tested features into Firefox. I love that we have this platform to interact with our users and gather feedback about potential changes to the browser.

What do you do, outside of work?

I have two kids (15 and 9) that keep me pretty busy. I coach each of their soccer teams in both the spring and the fall. I also help out with their other activities: basketball, lacrosse, cross country, and Girl Scouts. In addition, every week I play goalie for my futsal team. Futsal is similar to soccer, but played indoors and only 5 v. 5. And finally, we’ve just started volunteering at the Oregon Food Bank once a week, which is amazing.

M&Ms or Reese’s Pieces?

Reese’s Pieces…kept in the fridge.

Tell me something most people at Mozilla don’t know about you.

A few months before I joined Mozilla, my family and I spent 5 weeks traveling through western Europe and England. We flew into London and from there visited: Bath, Paris, Barcelona, Munich, Füssen, popped into Austria for 20 minutes, Bamberg, Rothenburg ob der Tauber, Berlin, Brussels, Bruges, Haarlem, and Amsterdam. It was an experience I’ll never forget, and we’re trying to plan another (shorter!) trip to a country we haven’t visited.


Welcome Marnie to the Test Pilot Team! was originally published in Firefox Test Pilot on Medium, where people are continuing the conversation by highlighting and responding to this story.

Air MozillaThe Joy of Coding - Episode 128

The Joy of Coding - Episode 128 mconley livehacks on real Firefox bugs while thinking aloud.

Daniel PocockWhat is the best online dating site and the best way to use it?

Somebody recently shared this with me: this is what happens when you attempt to access Parship, an online dating site, from the anonymous Tor Browser.

Experian is basically a private spy agency. Their website boasts about how they can:

  • Know who your customers are regardless of channel or device
  • Know where and how to reach your customers with optimal messages
  • Create and deliver exceptional experiences every time

Is that third objective, an "exceptional experience", what you were hoping for with their dating site honey trap? You are out of luck: you are not the customer, you are the product.

When the Berlin wall came down, people were horrified at what they found in the archives of the Stasi. Don't companies like Experian and Facebook gather far more data than this?

So can you succeed with online dating?

There are only three strategies that are worth mentioning:

  • Access sites you can't trust (which includes all dating sites, whether free or paid for) using anonymous services like Tor Browser and anonymous email addresses. Use fake photos and fake all other data. Don't send your real phone number through the messaging or chat facility in any of these sites because they can use that to match your anonymous account to a real identity: instead, get an extra SIM card that you pay for and top up with cash. One person told me they tried this for a month as an experiment, expediently cutting and pasting a message to each contact to arrange a meeting for coffee. At each date they would give the other person a card that apologized for their completely fake profile photos and offered to start over now that they could communicate beyond the prying eyes of the corporation.
  • Join online communities that are not primarily about dating and if a relationship comes naturally, it is a bonus.
  • If you really care about your future partner and don't want your photo to be a piece of bait used to exploit and oppress them, why not expand your real-world activities?

The Mozilla BlogA Perspective: Firefox Quantum’s Tracking Protection Gives Users The Right To Be Curious

In the physical world, we don’t wear our ID on our foreheads. This is convenient because we can walk around with a reasonable expectation of privacy and let our curiosity take us to interesting places. That shoe store you sauntered into because they had a pair that caught your eye has no idea who you are, where you live, or anything about you. More importantly, any attempt by that shoe store to have an employee follow you around would not only be impractical, but would be met with some serious side-eye from potential customers.

In the digital world, this isn’t true. Useful web technologies that make the sites you visit convenient and powerful can also be co-opted to track you wherever you go. The same incredible economies of scale that allow billions of people worldwide to stay connected also allow for the implementation of inexpensive and powerful methods of tracking. The profits from the sale of one pair of shoes allows the online shoe store to track thousands of people in the hopes of turning them into customers.

You would notice a beleaguered shoe store employee following you around, but you’re unlikely to notice most forms of online tracking. We’ve all had the experience where ads magically seem to follow us around, in a practice known as ‘retargeting’, and it’s often unnerving. However, the reality is that online tracking is mostly invisible. What’s more, it’s used to create a profile that ties together as much data as possible, in a practice called “cookie syncing”, in an effort to predict your habits and preferences, in the hopes that the ads and recommendations you get are more likely to trigger your behavior in a desirable way.

Sometimes, information about you can be helpful. For instance, finding out what the most popular accessories are for your new phone can help you make better decisions about what to buy. Of greater concern is the lack of consent. In the real world, we generally look before we leap, but on the Internet, there’s no way to ‘preview’ the tracking of a site before you click a link. Often, information about you and your visit is compiled into an online profile that can be shared and sold to others without your knowledge.

What’s true for shoes also applies to ideas. Another often overlooked inconvenience is how tracking impacts people’s ability to explore new areas of the web. Against the backdrop of growing online bubbles and polarized media, if all the content you get recommendations for is in the same line of thought, how much are you able to explore what’s across the political line?

With 40% of US internet users saying they have recently used ad blockers, people clearly have an intuitive understanding that trackers and ads can be annoying, but do ad blockers do what they want?

Many in the tech world have been looking into this. When the companies providing the ad blocker are also the world’s biggest advertising networks, will it truly give you the tools to be inconspicuously curious?

Google Chrome’s approach is focused on annoying ads. Its ad blocker blocks ads, but it does nothing against invisible trackers or tracking ads that comply with the standards of the Coalition for Better Ads, in which Facebook and Google are key partners. Even Apple’s Intelligent Tracking Prevention has a set of rules that favor trackers operated by sites that users visit at least once a day. Unsurprisingly, Google and Facebook are the sites most likely to fall into this category.

If you’re not using Firefox Quantum today and care about your privacy, I encourage you to give Firefox Quantum a try. With Tracking Protection turned on, you’ll get a web that lets you browse freely with fewer worries about pesky trackers, built by an independent organization that doesn’t run an ad network.

The post A Perspective: Firefox Quantum’s Tracking Protection Gives Users The Right To Be Curious appeared first on The Mozilla Blog.

Hacks.Mozilla.OrgCSS Grid for UI Layouts

CSS Grid is a great layout tool for content-driven websites that include long passages of text, and it has tremendous value for a variety of traditional UI layouts as well. In this article I’ll show you how to use CSS Grid to improve application layouts that need to respond and adapt to user interactions and changing conditions, and always have your panels scroll properly.

CSS Grid builds website layouts. It lets web designers create beautiful dynamic layouts using just a tiny bit of supported code instead of the endless float hacks we’ve had to use for years. My friend and co-worker Jen Simmons has been talking about CSS Grid for years, tirelessly pushing to get it implemented in browsers, and her work has paid off. As of the end of last year, the current version of every major browser, desktop and mobile, supports CSS Grid.

CSS Grid really is powerful, and you can build dynamic content driven websites easily, like in these examples. However, Grid is good for more than laying out pretty blocks of content. Grid gives you full control over both dimensions of your layout, including scrolling. This means features we take for granted in native applications like collapsing side-panels and fixed toolbars are now trivial to implement. No more hacks and no more debugging. Grid just works.

I’ve been building web tools for years. Here’s a screenshot of a game building tool I made for my retro RPGs. When Flexbox first appeared I immediately started using it. I built complex layouts using nested horizontal and vertical boxes, with a few utility classes for things like scrolling and stretching.

Flexbox has certainly made me more productive than absolutely positioned divs and float hacks, but it still has problems. Look at this closeup where panels come together. See how the footers on the left and right don’t line up?

Here’s another screenshot. The toolbar is at the top of the drawing canvas, and according to my framework it should be fixed at the top, but the minute you start scrolling this happens. The toolbar disappears:

Each of these problems can be fixed with more positioning and float hacks, but the result is always fragile. Every time I add a new panel I have to debug my layout all over again; searching to identify which div is grabbing the extra space during a resize. And the markup is ugly. The nested horizontal and vertical boxes become very complicated, and this example is only two levels deep. As interaction and functionality become more complex the design becomes even more challenging.

<div class='hbox'>
  <div class='vbox'>
    <div class='hbox'>header</div>
    <div class='scroll'>
      <div class='sidebar'>sidebar</div>
    </div>
    <div class='footer'>footer</div>
  </div>

  <div class='vbox'>
    <div class='hbox'>button button 
        spacer label spacer button button </div>
    <div class='center'>main content</div>
    <div class='hbox'>the footer</div>
  </div>

  <div class='vbox'>
    <div class='hbox'>header</div>
    <div class='scroll'>
      <div class='sidebar'>sidebar</div>
    </div>
    <div class='footer'>footer</div>
  </div>
</div>

Entering the Second Dimension

The fundamental problem with Flexbox is that it is one-dimensional. This makes Flexbox great for one-dimensional uses, like toolbars and navbars, but it begins to fail when I need to align content both horizontally and vertically at the same time. Instead I need real two-dimensional layout, which is why I need CSS Grid. Fundamentally, Grid is 2D.

Here’s a similar kind of layout built with CSS Grid.

Look closely at the bottom footers. They come together perfectly. And by using the grid-gap for the lines instead of adding borders to each panel, I don’t have to worry about inconsistent grid line widths. Everything just works.

The biggest benefit I get from CSS Grid is adapting to changing conditions. My apps often have side panels. I need to make sure everything in the layout works regardless of whether the panels are expanded or collapsed, ideally without having to recalculate layout in JavaScript. Sidebars are made out of multiple components like headers and footers. All of these need to line up, regardless of which one is larger or smaller.  Grid can do this too using a magic function called minmax().

If you’ve studied CSS Grid before then you know you can define your layout using templates for the rows and columns.  A template like 200px 1fr 200px will give you 200px wide sidebars with a middle content area taking up the rest of the space. But what happens if the panel should collapse? Right now the column would stay at 200px, even though the content has shrunk. Instead we can use minmax with the min-content keyword for the max parameter.

#grid {
  display: grid;
  box-sizing: border-box;
  width: 100vw;
  height: 100vh;
  grid-template-columns: 
      [start] minmax(auto, min-content) 
      [center]1fr 
      [end] minmax(auto,min-content);
  grid-template-rows: 
      [header]2em 
      [content]1fr 
      [footer]2em;
  grid-gap: 1px;
  background-color: black;
}

Now the grid column will always be just wide enough to hold whatever is in the column, using the items' minimum width. Thus if one part of the column (say the header) is wider than the others, the column will expand to fit them all. If they become skinnier or disappear altogether, then the column will adjust accordingly. Essentially we have replicated the expanding/contracting behavior of Flexbox, but made it work with everything in the column together, not just one item. This is real 2D layout.

Here is the code for the rest of the demo.


.start {
  grid-column: start;
}
.center {
  grid-column: center;
}
.end {
  grid-column: end;
}
header {
  grid-row: header;
}
footer {
  grid-row: footer;
}
.sidebar {
  overflow: auto;
}

<div id="grid">

<header class="start">header</header>
<header class="center">
  <button id="toggle-left">toggle left</button>
...
</header>

<header class="end">header</header>

 
<div class="start sidebar">sidebar</div>
<div class="center content">the center content</div>
<div class="end sidebar">
  sidebar<br/>
...
</div>
 
<footer class="start">left footer</footer>
<footer class="center">center footer</footer>
<footer class="end">right footer</footer>

</div>

To make the toggle buttons in the upper header actually hide the sidebars, I added this code. Note that with modern DOM APIs and arrow functions we can essentially replicate jQuery in just a few lines:

const $ = (selector) => document.querySelector(selector)
const $$ = (selector) => document.querySelectorAll(selector)
const on = (elem, type, listener) => elem.addEventListener(type,listener)

on($('#toggle-left'),'click',()=>{
  $$(".start").forEach((elem) => elem.classList.toggle('closed'))
})
on($('#toggle-right'),'click',()=>{
  $$(".end").forEach((elem) => elem.classList.toggle('closed'))
})

Also note that CSS Grid does not deprecate Flexbox. We still use Flexbox in the cases where it makes sense: namely one dimensional content like toolbars. Here are the styles that I’m using for my toolbars made out of headers:



<header class="center">
  <button id="toggle-left">toggle left</button>
  <button>open</button>
  <button>save</button>
  <span class="spacer"></span>
  <span>filename.txt</span>
  <span class="spacer"></span>
  <button>delete</button>
  <button id="toggle-right">toggle right</button>
</header>

header {
  background-color: #ccc;
  display: flex;
  flex-direction: row;
}

.spacer {
  flex: 1;
}

The spacer class makes an element take up all of the extra space. By using two spacers between the buttons I can make my toolbar shrink and grow as needed with the filename always in the middle. This is similar to native toolbars.

You can try out a demo live at this Codepen, then remix it to poke and prod.

See the Pen CSS Grid for UI Layouts by Josh Marinacci (@joshmarinacci) on CodePen.

CSS Grid is wonderful for designing interactive applications with two-dimensional complexity. We can keep the markup semantic. Panels and toolbars line up properly. The grid-gap gives us automatic borders. It adjusts our layout in complex ways without any JavaScript code, and it gives us control over both the horizontal and vertical. And we can do it all without using a heavy CSS framework.

Jen Simmons has started a new YouTube channel, Layout Land, to help you grok how Grid works. If you work on web apps or any kind of richly interactive website, you should try out CSS Grid.

QMOFirefox 59 Beta 10 Testday, February 16th

Greetings Mozillians!

We are happy to let you know that Friday, February 16th, we are organizing Firefox 59 Beta 10 Testday. We’ll be focusing our testing on Find Toolbar and Search Suggestions.

Check out the detailed instructions via this etherpad.

No previous testing experience is required, so feel free to join us on the #qa IRC channel, where our moderators will offer you guidance and answer your questions.

Join us and help us make Firefox better!

See you on Friday!

Air MozillaMartes Mozilleros, 13 Feb 2018

Martes Mozilleros Reunión bi-semanal para hablar sobre el estado de Mozilla, la comunidad y sus proyectos. Bi-weekly meeting to talk (in Spanish) about Mozilla status, community and...

Air MozillaBored and Brilliant: Finding Digital Equilibrium, with Manoush Zomorodi

Bored and Brilliant: Finding Digital Equilibrium, with Manoush Zomorodi “Doing nothing” is vital in an age of constant notifications and digital distractions. Manoush consulted with neuroscientists and cognitive psychologists about the possibilities of “mind...

Mozilla GFXWebRender newsletter #14

Your favorite WebRender newsletter is ready. TL;DR: “fixed […], fixed […], fixed […], Glenn makes things even faster, fixed […], ad-lib”. Still mostly focusing on conformance and stability, although there is some performance work in progress as well.

Without further ado:

Notable WebRender changes

  • Glenn implemented mip-mapping to get high-quality down-sampling of large images.
  • Martin cleaned some code up.
  • Martin made some changes to how clip ids are assigned in order to make it possible to reduce the amount of hashing we do and improve performance.
  • Glenn wrote the initial implementation of an image brush shader which allows us to segment some images and improve performance in common cases (when there’s no tiling, spacing and repetition).
  • Lee improved the precision of text shadow with sub-pixel anti-aliasing.
  • Martin improved something about how hit testing and clipping interact (I don’t quite understand the ins and outs of this but it fixed a hit testing issue).
  • Glenn avoided requesting images from the texture cache if they are already present in the pre-render cache (saves wasted work and memory).
  • Kats fixed an issue in the yaml serializer.
  • Lee implemented passing variation dictionaries on CTF font creation.
  • Kvark updated wrench and the sample apps to use an up to date version of glutin instead of an old fork.
  • Glenn implemented partial rendering of off-screen pictures and render tasks.
  • Glenn fixed z-ordering of transformed content inside preserve-3d contexts.
  • Martin made hit testing consistent between frames.
  • Kats avoided rendering frames that are generated only for hit testing.
  • Nical implemented tracking removed pipelines to facilitate managing externally owned memory.
  • eeejay implemented color matrix filters, useful for accessibility and SVG color matrix filters.

Notable Gecko changes

  • Jeff and Gankro enabled blob images by default.
  • Kats enabled WebRender hit testing by default, and it fixed a whole lot of bugs.
  • Lee worked around a sub-pixel glyph positioning issue.
  • Sotaro fixed an issue with windowed plugins such as flash.
  • Sotaro tweaked the swap chain type on Windows, and then fixed an issue with this specific configuration not allowing us to do a graceful compositor fallback when the driver fails.
  • Heftig fixed some text being rendered with the wrong metrics with some OpenType font collections by making sure we pass the proper face index to WebRender.
  • Jamie removed a limitation on the size of recording DrawTargets we use with blob images.
  • Gankro fixed a very bad rendering corruption by ensuring we make the right context current before rendering.
  • Andrew improved the way shared memory is handled to avoid creating many file descriptors (and avoid running into the limit).
  • Kats fixed a deadlock in the APZ code.
  • Kvark fixed a crash with the capture debugging tool, and another one.
  • Kvark automated including WebRender’s revision hash in the generated captures.
  • Kats fixed an assertion happening when transaction ids don’t increase monotonically (can happen when we reuse a pres context from the back-forward cache).
  • Martin fixed a bug in the way scroll ids are assigned and handled when building the display list for the root scroll frame.
  • Emilio fixed a memory leak.
  • Kats fixed an issue in the frame throttling logic which was messing with hit-testing.
  • Kats audited the reftests and marked new tests as passing.
  • Lee ensured the ClearType usage setting isn’t ignored when WebRender is enabled.
  • Sotaro fixed a memory leak.

Enabling WebRender in Firefox Nightly

In about:config, just set “gfx.webrender.all” to true and restart the browser.

Note that WebRender can only be enabled in Firefox Nightly.

This Week In RustThis Week in Rust 221

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

News & Blog Posts

Crate of the Week

This week sadly had to go without a crate for lack of votes.

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

117 pull requests were merged in the last week

New Contributors

  • bobtwinkles
  • Martin Algesten
  • Peter Hrvola
  • Yury Delendik

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now. This week's FCPs are:

New RFCs

Upcoming Events

The community team is trying to improve outreach to meetup organisers. Please fill out their call for contact info if you are running or used to run a meetup.

If you are running a Rust event please add it to the calendar to get it mentioned here. Email the Rust Community Team for access.

Rust Jobs

No jobs listed for this week.

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

No quote was selected for QotW.

Submit your quotes for next week!

This Week in Rust is edited by: nasa42 and llogiq.

Michael ComellaGmail Per-label Notifications for FastMail

As someone who tries not to check their email often, I’ve relied on the Gmail Android app to notify me when urgent emails arrive. To do this, I:

  1. Use Gmail filters (through the web interface) to add a label to important messages: notify me
  2. Enable notifications in the Gmail app (Settings -> me@gmail.com -> Notifications)
  3. Disable any “Label notifications” enabled by default (e.g. Inbox; Settings -> me@gmail.com -> Manage labels -> Inbox -> Label notifications)
  4. Enable “label notifications” for notify me

Now I’ll receive notifications on my Android phone for urgent emails, labeled notify me, but not for other emails - great!

FastMail

I recently switched to FastMail, which does not support notifications on a per-folder basis (folders stand in for labels), so I needed a new solution. I found one in FastMail’s Sieve Notify extension.

FastMail uses the Sieve programming language to filter incoming emails. When a user uses the Rules GUI to create these filters, FastMail will generate the necessary Sieve scripts used behind the scenes. However, the GUI is limited so they allow users to write custom sieve code to filter mail in advanced ways.

Their Sieve implementation comes with many extensions, one of which is the Notify extension. Notify, when called from a Sieve script, can launch a notification from the FastMail mobile app with the user-specified custom text. Note that Notify can also be used to forward an email or send a message through Twilio, Slack, or IFTTT.

Here are the basic steps to get Notify working:

  1. Write a custom Sieve script (through the web interface) to run Notify on desired emails (Settings -> Rules -> Edit custom sieve code; there will also be auto-generated Sieve code here)
  2. Disable notifications (yes disable!) in the FastMail app: this stops notifications for mail that isn’t using Notify (Device Settings -> Notifications)

Here’s an example Sieve script to open a notification for mail from a@b.com or x@y.xyz, keep the message, and stop further filtering:

if anyof(
    address :is "From" "a@b.com",
    address :is "From" "x@y.xyz"
) {
  # Notify the app with the specified text
  notify :method "app" :options ["From","Full"] :message "$from$ / $subject$ / $text[500]$";

  # Keep the message and don't go through additional filters.
  # Without "keep", "stop" will discard the message so be careful!
  keep;
  stop;
}

See additional details in the Notify documentation here. I paste this script in the bottom-most custom sieve section (below the rules generated from the Organize Rules GUI), though it could be moved anywhere. Once the script is added and notifications are disabled, the FastMail app will send notifications if and only if a message matches the Notify filters!

This solution comes with some pros:

  • Unlike the alternative solution below, notifications are independent of folders so all mail can end up in the Inbox
  • In theory, it will work automatically when switching between iOS and Android

And cons (on Android, at least):

  • Each email will create a new notification: they don’t batch together
  • Clicking on a notification does not launch the app: it must be opened from the launcher
  • The notifications have no quick actions (e.g. reply, archive, delete)
  • With my Sieve order, other filters will take precedence and may prevent Notify from running. This can be fixed.

An alternative solution

Before finding this solution, I first came up with another: I installed a local mail client that could enable/disable notifications at the folder level (K-9 Mail) and mimicked the Gmail app solution. This was unsatisfactory to me because it required me to check two folders for my new mail - Inbox and notify me - since a FastMail message can only exist in one folder, unlike Gmail labels. This solution also forced me to configure an unfamiliar client (K-9 isn’t simple…) and to trust it with login credentials.

Mozilla Security BlogRestricting AppCache to Secure Contexts

The Application Cache (AppCache) interface provides a caching mechanism that allows websites to run offline. Using this API, developers can specify resources that the browser should cache and make available to users offline. Unfortunately, AppCache has limitations in revalidating its cache, which allows attackers to trick the browser into never revalidating the cache by setting the manifest to a malformed cache file. Removing AppCache over HTTP connections removes the risk that users could indefinitely see stale cached content that came from a malicious connection.

Consider the following attack scenario: a user logs onto a coffee shop WiFi network where an attacker can manipulate traffic served over HTTP. Even if the user only visits one HTTP page over that WiFi, the attacker can plant many insecure iframes using AppCache, which allows the attacker to rig the cache with malicious content and manipulate all of those sites indefinitely. Even a cautious user who decides only to log in to their websites at home is at risk due to this stale cache.

In line with our previously stated intent of deprecating HTTP and requiring HTTPS for all new APIs, we are continuing to remove features from sites served over insecure connections. This means that websites wishing to preserve all their functionality should transition to TLS encryption as soon as possible.

In Firefox 60+ Beta and Nightly, Application Cache access from HTTP pages is denied. Starting with Firefox 62 Release, Application Cache over HTTP will be fully removed for all release channels. All other browsers have also stated their intent to remove: Chrome, Edge, WebKit. This change will also be reflected in the HTML standard.

Going forward, Firefox will deprecate more APIs over insecure connections in an attempt to increase adoption of HTTPS and improve the safety of the internet as a whole.

The post Restricting AppCache to Secure Contexts appeared first on Mozilla Security Blog.

The Mozilla BlogUpdate: Mozilla Will Re-File Suit Against FCC to Protect Net Neutrality

Protecting net neutrality is core to the internet and crucial for people’s jobs and everyday lives. It is imperative that all internet traffic be treated equally, without discrimination against content or type of traffic — that’s how the internet was built and what has made it one of the greatest inventions of all time.

What happened?

Last month, Mozilla filed a petition against the Federal Communications Commission for its disappointing decision to overturn the 2015 Open Internet Order because we believe it violates federal law and harms internet users and innovators.

We said that we believed the filing date should be later (while the timing seemed clear in the December 2017 draft order from the FCC, federal law is more ambiguous). We urged the FCC to determine that the later date was appropriate, but we filed on January 16 because we are not taking any chances with an issue of this importance.

On Friday, the FCC filed to dismiss this suit and require us to refile after the order has been published in the Federal Register, as we had anticipated.

What’s next?

We will always fight to protect the open internet and will continue to challenge the FCC’s decision to destroy net neutrality in the courts, in Congress, and with our allies and internet users.

The FCC’s decision to destroy net neutrality rules is the result of broken processes, broken politics, and broken policies. It will end the internet as we know it, harm internet users and small businesses, and erode free speech, competition, innovation, and user choice in the process. In fact, it really only benefits large Internet Service Providers.

We will re-file our suit against the FCC at the appropriate time (10 days after the order is published in the Federal Register).

What can you do?

You can call your elected officials and urge them to support net neutrality and an open internet. Net neutrality is not a partisan or U.S. issue and the decision to remove protections for net neutrality is the result of broken processes, broken politics, and broken policies. We need politicians to decide to protect users and innovation online rather than increase the power of a few large ISPs.

The post Update: Mozilla Will Re-File Suit Against FCC to Protect Net Neutrality appeared first on The Mozilla Blog.

K Lars LohnLars and the Real Internet of Things - Part 2


In part 1 of this missive, I talked about my own checkered history of trying to control devices in my home.   Today I'm going to talk about setting up the Things Gateway software.

Disclaimers and Setting Expectations: The Things Gateway is an experimental proof of concept, not a polished commercial product.  It is aimed at the makers of the technology world, not someone expecting an effortless and flawless plug-and-play experience.  You will encounter glitches and awkward interfaces.  You may even have to interact with the Linux command line.

The Mozilla IoT Gateway is not yet a functional equivalent of, or a replacement for, the commercial products. There are features missing that you will find in commercial products. It is the hope of Mozilla that this project will evolve into a full-featured product, but that will take time and effort.

This is where we invite everyone in.  The Things Gateway is open source.  We encourage folks to participate, help add the missing features, help add support for more and more IoT capable things.

Goal: I want to get the Things Gateway by Mozilla up and running on a headless Raspberry Pi.  It will communicate with a smart light bulb and a smart plug-in switch using a Zigbee adapter.  The IoT system will be configured for operation exclusively on the local network, with no incoming or outgoing communication with the Internet.


Okay, we've had the history, the disclaimers and the goal, so let's start controlling things.  If there are terms that I use and you don't know what they mean, look to the Mozilla IoT Glossary of Terms.

To work with the Things Gateway software, you're going to need some hardware.  I'm going to demonstrate using the ZigBee protocol and devices. To follow exactly what I'm going to do, you'll need to acquire the hardware, or equivalents, in the chart below.  In future articles, I'm going to show how to add Z-Wave, Philips Hue, Ikea TRÅDFRI and TP-Link hardware. Then I'll get into programming to add support for devices we've not even thought of yet.


Item | What's it for? | Where I got it
A laptop or desktop PC | This will be used to download the software and create the media that will boot the Raspberry Pi | I'm going to use my Linux workstation; any PC will do
µSD Card Reader | Needed only if there is no other way for the desktop or laptop to write to a µSD card | I used a Transcend TS-RDF5K that I bought on Amazon years ago
Raspberry Pi Model 3 B | This is the single board computer that will run the Things Gateway | These are available from many vendors like Amazon and Adafruit
5V µUSB Power Supply | This supplies power to the Raspberry Pi | I had a spare one lying around, but you can probably get one from the vendor that sold the Raspberry Pi to you
µSD Card | This will essentially be the Raspberry Pi's hard drive; it needs to be at least 4GB | I got mine in the checkout aisle of the grocery store
DIGI XStick | This allows the Raspberry Pi to talk the ZigBee protocol - there are several models; make sure you get the XU-Z11 model | The only place that I could find this was Mouser Electronics
CREE Connected ZigBee Compatible Light Bulb | It's a thing to control | I got one from my local Home Depot
Sylvania SMART+ Plug ZigBee Compatible Appliance Switch | It's a thing to control | I got this one from Amazon

Step 1: I downloaded the Things Gateway image file by pressing the Download button on the Build Your Own Web of Things page while using my Linux workstation.  You should use a desktop or laptop. You can be successful using any OS; I just happen to be a Linux geek.

Step 2: I flashed the image onto my µSD card using my µSD Card Reader.  General instructions can be found on the installing images page. Since I'm using a Linux machine, I just used shell-based tools like lsblk and dd.  I had to identify my µSD card by its size.  I knew it would be the smallest device on my machine, so I identified it as the last on the lsblk list: sdh. Be very careful that you select the correct disk.  This is a very sharp knife; choosing the wrong disk could be a disaster.  Proceed at your own risk.
bozeman:~ ᚥ cd Downloads
bozeman:~/Downloads ᚥ unzip gateway-0.3.0.img.zip
bozeman:~/Downloads ᚥ lsblk | grep disk
sda 8:0 0 894.3G 0 disk
sdb 8:16 0 1.8T 0 disk
sdc 8:32 0 2.7T 0 disk
sdd 8:48 0 2.7T 0 disk
sde 8:64 0 2.7T 0 disk
sdf 8:80 0 2.7T 0 disk
sdh 8:112 1 14.4G 0 disk
bozeman:~/Downloads ᚥ sudo dd bs=4M if=gateway-0.3.0.img of=/dev/¿¿¿  # substitute your device name
[sudo] password for lars: **********
Because my Raspberry Pi is headless, having no keyboard or monitor, I need to have a way to communicate with it if something goes wrong. I'm going to enable the ssh server so I can connect to it from another computer.
bozeman:~/Downloads ᚥ mkdir rpi-boot
bozeman:~/Downloads ᚥ sudo mount /dev/¿¿¿1 rpi-boot  # substitute your device name
bozeman:~/Downloads ᚥ sudo touch rpi-boot/ssh
bozeman:~/Downloads ᚥ sudo umount rpi-boot
bozeman:~/Downloads ᚥ rmdir rpi-boot
bozeman:~/Downloads ᚥ

Step 3: I put the newly minted µSD card, the X-Stick and the network cable into my Raspberry Pi and applied power. It took about 45 seconds to boot to the point that the ssh server was working. While I waited, I set up my two test lamps.



Step 4: I'm going to depend on ssh to communicate with the Raspberry Pi, so I need to make sure ssh is secure. We can't let the "pi" account sit with the default password, so we must change it.
bozeman:~/Downloads ᚥ ssh pi@gateway.local
pi@gateway.local's password: raspberry
Linux gateway 4.9.59-v7+ #1047 SMP Sun Oct 29 12:19:23 GMT 2017 armv7l
...
pi@gateway:~ $ passwd
Changing password for pi.
(current) UNIX password:raspberry
Enter new UNIX password: **********
Retype new UNIX password: **********
passwd: password updated successfully
pi@gateway:~ $ exit
logout
Connection to gateway.local closed.
bozeman:~/Downloads ᚥ



Step 5: By this time, the Things Gateway web server was up and running, and I connected to it with Firefox on my workstation.


I'd rather have my Gateway communicate over its Ethernet cable, so I don't want to set up WiFi.  I pressed Skip.


At this point in the setup, the Raspberry Pi is going to reboot.  For me, it took about two minutes before I was able to move on.

Step 6: After the delay, I retyped "gateway.local" into the URL bar to continue with the setup.  In this step, one would normally choose a subdomain so that their Things Gateway would be accessible from the Internet.  I do not intend to use that feature.


Again, I pressed Skip.



Step 7: Next, I registered a username and password.  I'll use these to log in to the Things Gateway from my browser on my local network.  Notice that the Firefox URL bar shows that this web site is insecure.  When you type your user name and password, you'll be warned again.

Because we're on our own network, this is a tolerable situation for the moment.  We could add a self-signed certificate and add a security exception to get rid of the warnings, but for now, I'm going to live with it.


Step 8: The Things Gateway is now up and running.  It shows us that it has not detected any devices yet.  However, before we move on, we can enable some more features in the Zigbee Adapter by updating it.  The simplest way to do that is to delete the Add-on that controls it and immediately reinstall it.

Go to Settings by clicking the three-horizontal-bar drop-down menu icon in the upper left, then selecting Settings, then Add-ons:


Remove the zigbee-adapter add-on:


Then click the "+" to add it back in.  This ensures we've got the latest code.  I think this is an awkward way to get an update; hopefully the project will improve that particular piece of UX.

Step 9: Leave Settings by backing out using the Left Arrow buttons in the upper left until you're back at the Main Menu:


Select "Things", then press the "+" button.


Step 10: For me, it immediately found my two devices: the plug on/off switch and the CREE dimmable light bulb.  I gave them more understandable names, pressed "Save" on each, and then "Done".


Step 11: Next, I got to enjoy the ability to control lights from my computer for the first time since the 1990s.  I explored the interface and made a few rules.


In future editions in this series, I'm going to set up lighting for an old-fashioned photo darkroom.  I want an easy way to switch between white and red safety lighting, so I'll make a rule that will not allow both red and white lights to be on at the same time. This sounds like a perfect use for a Philips Hue color-changing bulb, eh?

Why do we need lighting for a photo darkroom?  I'll reveal that in a future blog post, too.


Don MartiFOSDEM videos

Check it out. The videos from the Mozilla room at FOSDEM are up, and here's me, talking about bug futures.

All FOSDEM videos

And, yes, the video link Just Works. Bonus link to some background on that: The Fight For Patent-Unencumbered Media Codecs Is Nearly Won by Robert O'Callahan

Another bonus link: FOSDEM video project, including what those custom boxes do.

Dzmitry MalyshauFeasibility of low-level GPU access on the Web

This is a follow-up to the Defining the Web platform post from a year ago. It doesn’t try to answer questions, but instead attempts to figure out what the right questions to ask are.

Trade-offs

As the talks within the WebGPU community group progress, it becomes apparent that the disagreements lie in more domains than simply the technical. It’s about what the Web is today, and what we want it to become tomorrow. The further we go down the rabbit hole of low-level features, the more we have to stretch the very definition of the Web:

  1. harder to guarantee security:
    • if the descriptors can be purely GPU driven (like Metal argument buffers), the browser can’t guarantee that they point to owned and initialized GPU memory.
    • if there are many exposed flags for device capabilities, it’s easy for a script to fingerprint users based on that profile.
  2. harder to achieve portability:
    • any use of incoherent memory
    • rich memory types spectrum (like Vulkan memory types/heaps) means that different platforms would have to take different code paths if implemented correctly, and that’s difficult to test
  3. makes the API increasingly complex to use, which Web developers may not welcome:
    • declaring a Vulkan-like render pass with multiple passes is a challenge
  4. finally, all this goes into performance characteristics:
    • potentially lower CPU cost, given the bulk of validation work moved from the driver onto the browser
    • more consistent framerate, given better control of scheduled GPU work

Consistency

It is clear that the native APIs have reached decent consistency in their respective programming models. Both Metal and Vulkan appear to be positioned at their local maxima, while I can’t confidently state the same about NXT, which is still unclear:

 /*|     /**\         ?*?
/  | .. /    \ ..... ? ? ?
Metal   Vulkan        NXT

One can’t just take a small step away from an existing API and remain consistent. Finding a whole new local maximum is hard, because it means that the previous API designers (Khronos, Microsoft, Apple) either haven’t discovered it or simply discarded it for being inferior. And while we have Microsoft and Apple representatives in the group, we lack the opinion of Vulkan designers from Khronos.

Origins

My first prototype was based on Metal, after which I had a chance to talk a lot about the future of Web GPU access with fellow Mozillians. The feedback I got was very consistent:

  • use the existing Vulkan API, making the necessary adjustments to provide security and portability
  • provide maximum possible performance by building a platform, and let higher-level libraries build on it and expose simpler APIs to the users

A similar position was expressed by some of the Vulkan Portability members. And this is where we’ve been going so far with the development of the portability layer and experiments with a Vulkan-based WebIDL implementation in Servo. Check out the recent talk at FOSDEM 2018 as well as our 2017 report for more details. Unfortunately, this direction faces strong opposition from the rest of the W3C group.

Portability

Interestingly, there is an existing precedent of providing a less portable Web platform API (quote from an article by Lin Clark):

SharedArrayBuffers could result in race conditions. This makes working with SharedArrayBuffers hard. We don’t expect application developers to use SharedArrayBuffers directly. But library developers who have experience with multithreaded programming in other languages can use these new low-level APIs to create higher-level tools. Then application developers can use these tools without touching SharedArrayBuffers or Atomics directly.

So if the question is “Does a Web API have to be portable?”, the answer is a definite “no”. It’s a nice property, but not a requirement, and its absence can be justified by other factors, such as performance. Speaking of which… the case for performance gains in SharedArrayBuffers appears to be strong, which convinced the browser vendors to agree on exposing this low-level API. Now, can we apply the same reasoning to get a Vulkan level of explicitness and portability to the Web? Can we rely on user libraries to make it more accessible and portable? Having some performance metrics would be great, but obtaining them appears to be extremely difficult.

Note: currently, SharedArrayBuffers are disabled in all browsers because the high-resolution timers they enable were discovered to be exploitable. One could argue that this is related to them being low-level, and thus we shouldn’t take them as a good example for a Web API. This is countered by the fact that the disabling is temporary, and the usefulness of SABs is completely clear (see the ECMA security issue, and also note this comment).

Performance

What would be a good benchmark that shows the cost of memory barriers and types being implicit in the API?

  1. we need to have the same GPU-intensive workloads running on Vulkan and Metal, preferably utilizing multiple command queues and advanced features like multiple passes or secondary command buffers
  2. … on the same hardware, which means Windows installed on a MacBook. Even then, we may see the differences in how OS schedules hardware access, how drivers are implemented, how applications settings are configured on different OSes, etc.
  3. the split between the Vulkan and Metal code paths should happen at a fairly high level. If it’s done via an API abstraction layer like Vulkan Portability, then the results are obviously going to be skewed towards this API. Splitting at a high level means taking most advantage of the native API features, but also means more work for the developer.

Until that benchmarking is done, we can’t reasonably argue in favor of performance when there are concrete sacrifices in portability, API complexity (and security, to an extent) that come with it… Defending the Vulkan-like approach goes like this:

  • A: pipeline barriers are confusing, error-prone, and not evidently fast, let’s make them implicit. Look, our native API does that, and it’s successful.
  • M: but… tracking the actual layout and access flags on our side is hard (for the backends that require them), and it gets really difficult when multiple queues are involved that share resources
  • G: let’s only share resources with immutable access flags then, and transition ownership explicitly between queues otherwise
  • A: manually synchronizing submissions between queues is hard to get right anyway, prone to portability issues, etc. Let’s only have a single queue!

And so it goes… one aspect leading to another, proving that the existing APIs are consistent and sit at their local maxima, and that Metal more easily fits the shoes of the Web.

Result

Ok, this post is turning into a rant, rather inevitably… Sorry!

The unfortunate part of the story is that the group will not agree on an existing API, no matter what it is, because it would be biased towards a specific platform. And while Mozilla and Apple are at least conceptually aligned with existing API concepts, Google has been actively trying to come up with a new one in NXT. As a result, not only would multi-platform applications have to add support for yet another API when ported to the Web, but that API is also going to be targeted from native code via Emscripten and/or WebAssembly, and Google is intent on providing a standalone (no-browser) way of using it, effectively adding to the list of native APIs… without any IHV backing, and promoted by… browser vendors? That is the path the group appears to be drifting down unconsciously, instead of building on top of existing knowledge and research.

The current struggle of developing the Web GPU API comes down to many factors. One of the most important ones, if not the most important, is that the parties have different end results envisioned. Some would like the JavaScript use to be nice, idiomatic, and error-resistant. Some mostly care about WebAssembly being fast. Some can’t afford a badly written application looking different on different mobile phones. Some can’t expect developers to be smart enough to use a complicated API. Some leave WebGL as a backup choice for a simple and accessible, but slower, API.

And the fact we are discussing this within W3C doesn’t help either. We don’t have immediate access to Khronos IHV experts and ISV advisory panels. We desperately need feedback from actual target users on what they want. Who knows, maybe the real answer is that we are better off without yet another API at all?

The Mozilla BlogAn Open Letter to Justice Srikrishna About Data Privacy and Aadhaar


Note: This open letter, penned by Mozilla executive chairwoman Mitchell Baker, appears as a full-page advertisement in the February 9 edition of The Hindustan Times. It is co-signed by 1,447 Mozilla India community members. To learn more about Mozilla’s work regarding India’s data protection law and Aadhaar, visit https://foundation.mozilla.org/campaigns/aadhaar/.


Dear Justice Srikrishna and the Honourable Members of the Ministry of Electronics and Information Technology Committee of Experts,

With the support of and in solidarity with members of Mozilla’s community in India, I write today to urge you to stand up for the privacy and security of all Indians. Your recent consultation on the form of India’s first comprehensive data protection law comes at an auspicious time. The Supreme Court of India has ruled unequivocally that privacy is a fundamental right guaranteed to all Indians by the Indian Constitution. We ask that you take that decision and ensure that right is made a reality in law.

Mozilla’s work on upholding privacy is guided by the Mozilla Manifesto, which states: “Individual security and privacy is fundamental and must not be treated as optional online” (Principle 4). Our commitment to the principle can be seen both in the open source code of our products as well as in our policies such as Mozilla’s Data Privacy Principles. The Mozilla India Community has run numerous campaigns to educate Indians on how to protect themselves online.

Data protection is a critical tool for guaranteeing fundamental rights of privacy. It is particularly important today as Aadhaar is being driven deeper into all aspects of life. Digital identity can bring many benefits, but it can also become a surveillance and privacy disaster. A strong data protection law is key to avoiding disaster.

In the digital age, especially in regards to the Aadhaar, individual security and privacy is increasingly being put at risk. Recently, a private citizen was able to buy access to all of the demographic data in the Aadhaar database for just 500 rupees. There have been countless leaks, security incidents, and instances where private Aadhaar data has been published online. Private companies are increasingly requiring Aadhaar in order to use their services. In the vacuum created by India’s lack of a comprehensive data protection law, the Government of India continues its relentless push to make Aadhaar mandatory for ever more government programs and private sector services, in contravention of the directives of the Supreme Court.

We commend you for the strong recommendations and overall framework proposed in your report. While this represents important progress in developing a strong data protection framework, we remain concerned about several missing protections:

  • The current proposal exempts biometric info from the definition of sensitive personal information that must be especially protected. This is backwards: biometric info is some of the most personal info, and can’t be “reset” like a password.
  • The design of Aadhaar fails to provide meaningful consent to users. This is seen, for example, by the ever increasing number of public and private services that are linked to Aadhaar without users being given a meaningful choice in the matter. This can and should be remedied by stronger consent, data minimization, collection limitation, and purpose limitation obligations.
  • Instead of crafting narrow exemptions for the legitimate needs of law enforcement, you propose to exempt entire agencies from accountability and legal restrictions on how user data may be accessed and processed.
  • Your report also casts doubt on whether individuals should be allowed a right to object over how their data is processed. This is a core pillar of data protection; without a right to object, consent is not meaningful and individual liberty is curtailed.

There is resounding support for privacy in India, and the Supreme Court has made clear that the protection of individual privacy and security is an imperative for the Government of India. We hope you and your colleagues in the Government of India will take this opportunity to develop a data protection law that strongly protects the rights of users and makes India’s framework a model for the world.

Sincerely,

Mitchell Baker, Executive Chairwoman, Mozilla


The Hindustan Times advertisement:

 

The post An Open Letter to Justice Srikrishna About Data Privacy and Aadhaar appeared first on The Mozilla Blog.

Nick FitzgeraldA Wee Allocator for WebAssembly

Recently, we introduced WebAssembly (compiled from Rust) into the source-map JavaScript library. We saw parsing and querying source maps get a whole lot faster. But keeping the compiled .wasm code size small was a challenge.

There are a variety of tools available for removing dead code from WebAssembly and compressing .wasm sizes:

  • wasm-gc constructs the callgraph of all the functions in a .wasm file, determines which functions are never transitively called by an exported function, and removes them.0

  • wasm-snip replaces a WebAssembly function’s body with a single unreachable instruction. This is useful for manually removing functions that will never be called at runtime, but which the compiler and wasm-gc couldn’t statically prove were dead. After snipping a function, other functions that were only called by the snipped function become unreachable, so running wasm-gc again after wasm-snip often yields further code size reductions.

  • wasm-opt runs binaryen‘s sophisticated optimization passes over a .wasm file, both shrinking its size and improving its runtime performance.

After using these tools, we looked at what was left in our .wasm file, and broke the results down by crate:

Half of our code size was coming from dlmalloc, the allocator that Rust uses by default for the wasm32-unknown-unknown target. Source map parsing requires just a couple of large, long-lived allocations, and then does its heavy lifting without allocating any further. Querying a parsed source map doesn’t require any allocation. So we are paying a lot, in code size, for an allocator that we aren’t really using that much.

Allocator implementations have trade offs, but Rust’s default is the wrong choice for the source map parsing and querying scenario.

Introducing wee_alloc

wee_alloc is a work-in-progress memory allocator designed for WebAssembly. It has a tiny code size footprint, compiling down to only a kilobyte of .wasm code.

wee_alloc is designed for scenarios like those described above: where there are a handful of initial, long-lived allocations, after which the code does its heavy lifting without any further allocations. This scenario requires some allocator to exist, but we are more than happy to trade performance for small code size.

In contrast, wee_alloc is not designed for, and would be a poor choice in, scenarios where allocation is a performance bottleneck.

Although WebAssembly is the primary target, wee_alloc also has an mmap-based implementation for unix systems. This enables testing wee_alloc, and using wee_alloc in your own code, without a browser or WebAssembly engine.
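
As a rough illustration, here is how a crate might opt into wee_alloc as its global allocator on a recent Rust toolchain. Treat this as a sketch: the #[global_allocator] attribute and the WeeAlloc::INIT constructor reflect today’s crate and standard library, not necessarily the exact setup that existed when this post was written.

// Use wee_alloc as this program's global allocator.
#[global_allocator]
static ALLOC: wee_alloc::WeeAlloc = wee_alloc::WeeAlloc::INIT;

fn main() {
    // Every heap allocation below is now serviced by wee_alloc.
    let v: Vec<u32> = (0..16).collect();
    println!("allocated {} elements", v.len());
}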

How wee_alloc Works

Allocating WebAssembly Pages

WebAssembly module instances have a linear memory space1, and use store and load instructions to access values within it via an index. If an instruction attempts to access the value at an index that is beyond the memory’s bounds, a trap is raised.

There are two instructions for manipulating the linear memory space itself, rather than its contents: current_memory and grow_memory2. The current_memory instruction gives the current size of memory, in units of pages. The grow_memory instruction takes an operand n, grows the memory space by n pages, and gives back the old size of memory, in units of pages. If growing memory fails, -1 is returned instead.

WebAssembly does not have any facilities for shrinking memory, at least right now.

To implement allocating n pages of memory in Rust, we need to use LLVM’s intrinsics. Right now, the intrinsic for grow_memory doesn’t expose the return value, but this should be fixed once Rust updates its LLVM. Therefore, our page allocation routine must use current_memory before grow_memory, when it should just use the result of grow_memory instead. It also means we can’t check for failure yet.

extern "C" {
    #[link_name = "llvm.wasm.current.memory.i32"]
    fn current_memory() -> usize;

    #[link_name = "llvm.wasm.grow.memory.i32"]
    fn grow_memory(pages: usize);
}

unsafe fn get_base_pointer() -> *mut u8 {
    (current_memory() * PAGE_SIZE.0) as *mut u8
}

unsafe fn alloc_pages(n: Pages) -> *mut u8 {
    let ptr = get_base_pointer();
    grow_memory(n.0);
    ptr
}

Careful readers will have noticed the Pages type in alloc_pages’s type signature. wee_alloc uses newtypes for units of bytes, words, and pages. Each of these is a thin wrapper around a usize with relevant operator overloads and inter-conversions. This has been very helpful in catching bugs at compile time, like attempts to offset a pointer by two words rather than two bytes, and compiles away to nothing in the emitted .wasm.
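
To make that concrete, here is a minimal sketch of the units-as-newtypes idea. The real wee_alloc definitions carry more operator overloads and conversions (round_up_to, for example) than are shown here, so the details below are illustrative assumptions rather than the crate’s actual code.

#[derive(Clone, Copy, PartialEq, Eq, PartialOrd, Ord)]
struct Bytes(usize);

#[derive(Clone, Copy, PartialEq, Eq, PartialOrd, Ord)]
struct Words(usize);

#[derive(Clone, Copy, PartialEq, Eq, PartialOrd, Ord)]
struct Pages(usize);

const WORD_SIZE: usize = core::mem::size_of::<usize>();
const PAGE_SIZE: Bytes = Bytes(64 * 1024); // A WebAssembly page is 64KiB.

impl From<Words> for Bytes {
    fn from(w: Words) -> Bytes {
        Bytes(w.0 * WORD_SIZE)
    }
}

impl From<Pages> for Bytes {
    fn from(p: Pages) -> Bytes {
        Bytes(p.0 * PAGE_SIZE.0)
    }
}

impl core::ops::Add for Bytes {
    type Output = Bytes;
    fn add(self, rhs: Bytes) -> Bytes {
        Bytes(self.0 + rhs.0)
    }
}

Offsetting a pointer now requires saying which unit you mean (Bytes::from(Words(2)).0 rather than a bare 2), so mixing up words and bytes becomes a type error instead of a heap-corrupting bug.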

Free Lists

But we don’t satisfy individual allocation requests by directly allocating pages. First, the WebAssembly page size is 64KiB, which is much larger than most allocations. Second, because there is no way to return unused pages to the WebAssembly engine, it would be incredibly wasteful if we didn’t reuse pages. Instead, we maintain a free list of blocks of memory we’ve already allocated from the WebAssembly engine.

Free lists have low complexity and are easy to implement. These properties also lend themselves to a small implementation. The basic idea is to maintain an intrusive linked list of memory blocks that are available. Allocation removes a block from the free list. If the block is larger than the requested allocation size, we can split it in two. Or, if there is no suitable block available in the free list, we can fall back to alloc_pages to get a fresh block of memory. Deallocation reinserts the given block back into the free list, so that it can be reused for future allocations. Because a block is only in the free list if it is not allocated and is therefore unused, we can use a word from the data itself to store the free list links, so long as we ensure that the data is always at least a word in size.

Here is a diagram of what the free list looks like in memory, showing a free block, followed by an allocated block, followed by another free block:


--. .--------------------------------------. ,-----------
  | |                                      | |
  V |                                      V |
+----------------\\-----+---------\\-----+---------------
| free ; data... // ... | data... // ... | free ; data...
+----------------\\-----+---------\\-----+---------------

Even after choosing to use free lists, we have more design choices to make. How should we choose which block to use from the free list? The first that can satisfy this allocation, aka first fit? The block that is closest to the requested allocation size, in an effort to cut down on fragmentation (more on this in a minute), aka best fit? Pick up the search where we left off last time, aka next fit? Regardless which of first fit, best fit, and next fit we choose, we are dealing with an O(n) search. Indeed, this is the downside to the trade off we made when choosing free lists for their simplicity of implementation.

A common technique for speeding up free list allocation is to have separate free lists for allocations of different sizes, which is known as segregated fits or having size classes. With this approach, we can guarantee the invariant that every block in the free list for a particular size can satisfy an allocation request of that size. All we need to do is avoid splitting any block into pieces smaller than that size.

Maintaining this invariant gives us amortized O(1) allocation for the sizes that have their own free lists. Using first fit within a size’s free list is guaranteed to only look at the first block within the free list, because the invariant tells us that the first block can satisfy this request. It is amortized because we need to fall back to the O(n) allocation to refill this size’s free list from the main free list whenever it is empty.

If we reuse the same first fit allocation routine for both our size classes’ free lists and our main free list, then we get the benefits of size classes without paying for them in extra code size.

This is the approach that wee_alloc takes.

Fragmentation

Our other main concern is avoiding fragmentation. Fragmentation is the degree of wasted space between allocations in memory. High fragmentation can lead to situations where there exist many free blocks of memory, but none of which can fulfill some allocation request, because each individual free block’s size is too small, even if the sum of their sizes is more than enough for the requested allocation. Therefore, a high degree of fragmentation can effectively break an allocator. It had one job — allocate memory — and it can’t even do that anymore. So wee_alloc really should have some kind of story here; punting 100% on fragmentation is not a practical option.

Once again there are trade offs, and avoiding fragmentation is not a binary choice. On one end of the spectrum, compacting garbage collectors can re-arrange objects in memory and pack them tightly next to each other, effectively leading to zero fragmentation. The cost that you pay is the size and time overhead of a full tracing garbage collector that can enumerate all pointers in the system and patch them to point to moved objects’ new locations. On the opposite end of the spectrum, if we never re-consolidate two blocks of adjacent memory that we previously split from what had originally been a single contiguous block, then we can expect a lot of fragmentation. As we split blocks into smaller blocks for smaller allocations, we get small bits of wasted space between them, and even after we free all these small allocations, we won’t have any large block in the free list for a large allocation. Even if we have multiple adjacent, small blocks in the free list that could be merged together to satisfy the large allocation.

One possibility is keeping the free list sorted by each block’s address, and then deallocating a block would re-insert it into the free list at the sorted location. If either of its neighbors in the free list are immediately adjacent in memory, we could consolidate them. But then deallocation is an O(n) search through the free list, instead of the O(1) push onto its front. We could lower that to O(log n) by representing the free list as a balanced search tree or btree. But the implementation complexity goes up, and I suspect code size will go up with it.

Instead of a free list, we could use bitmaps to track which portions of our heap are free or allocated, and then the consolidation could happen automatically as bits next to each other are reset. But then we need to restrict our allocator to parceling out portions of a single, contiguous region of memory. This implies that only a single, global allocator exists, since if there were multiple, each instance would want to “own” the end of the WebAssembly linear memory, and have the power to grow it to satisfy more and larger allocations. And maybe this is a fair constraint to impose in the context of WebAssembly, where memory is already linear and contiguous. But lifting this constraint, while still using bitmaps, implies a hybrid free list and bitmap implementation. The downside to that is more implementation complexity, and a larger code size footprint.

wee_alloc takes a third approach: trading some space overhead for easy and fast merging. We maintain a sorted, doubly-linked list of all blocks, whether allocated or free. This adds two words of space overhead to every heap allocation. When freeing a block, we check if either of its adjacent blocks are also free, and if so, merge them together with a handful of updates to the next and previous pointers. If neither of the neighbors are free, then we push this block onto the front of the free list. In this way, we keep both O(1) deallocation and our simple free list implementation.

Here is a diagram of what this sorted, doubly-linked list looks like in memory:


 ,---------------------------------.                           ,---------------------
 |                              ,--|---------------------------|--.
 |  X                           |  |                           |  |
 V  |                           V  |                           V  |
+-----------------------\\-----+-----------------------\\-----+----------------------
| prev ; next ; data... // ... | prev ; next ; data... // ... | prev ; next ; data...
+-----------------------\\-----+-----------------------\\-----+----------------------
          |                     ^        |                     ^        |
          |                     |        |                     |        |
          `---------------------'        `---------------------'        `------------

CellHeader, FreeCell, and AllocatedCell

The CellHeader contains the common data members found within both allocated and free memory blocks: the next and previous doubly-linked list pointers.

#[repr(C)]
struct CellHeader {
    next_cell_raw: ptr::NonNull<CellHeader>,
    prev_cell_raw: *mut CellHeader,
}

We use a low bit of the next_cell_raw pointer to distinguish whether the cell is allocated or free, and can consult its value to dynamically cast to an AllocatedCell or FreeCell.

impl CellHeader {
    const IS_ALLOCATED: usize = 0b01;

    fn is_allocated(&self) -> bool {
        self.next_cell_raw.as_ptr() as usize & Self::IS_ALLOCATED != 0
    }

    fn is_free(&self) -> bool {
        !self.is_allocated()
    }

    fn set_allocated(&mut self) {
        let next = self.next_cell_raw.as_ptr() as usize;
        let next = next | Self::IS_ALLOCATED;
        self.next_cell_raw = unsafe {
            ptr::NonNull::new_unchecked(next as *mut CellHeader)
        };
    }

    fn set_free(&mut self) {
        let next = self.next_cell_raw.as_ptr() as usize;
        let next = next & !Self::IS_ALLOCATED;
        self.next_cell_raw = unsafe {
            ptr::NonNull::new_unchecked(next as *mut CellHeader)
        };
    }

    fn as_free_cell_mut(&mut self) -> Option<&mut FreeCell> {
        if self.is_free() {
            Some(unsafe { mem::transmute(self) })
        } else {
            None
        }
    }
}

We use pointer arithmetic to calculate the size of a given cell’s data to avoid another word of space overhead, so the next_cell_raw pointer must always point just after this cell’s data. But, because of that restriction, we can’t use a null pointer as the sentinel for the end of the doubly-linked-list. Therefore, we use the second low bit of the next_cell_raw pointer to distinguish whether the data pointed to by next_cell_raw (after the appropriate masking) is a valid cell, or is garbage memory.

impl CellHeader {
    const NEXT_CELL_IS_INVALID: usize = 0b10;
    const MASK: usize = !0b11;

    fn next_cell_is_invalid(&self) -> bool {
        let next = self.next_cell_raw.as_ptr() as usize;
        next & Self::NEXT_CELL_IS_INVALID != 0
    }

    fn next_cell_unchecked(&self) -> *mut CellHeader {
        let ptr = self.next_cell_raw.as_ptr() as usize;
        let ptr = ptr & Self::MASK;
        let ptr = ptr as *mut CellHeader;
        ptr
    }

    fn next_cell(&self) -> Option<*mut CellHeader> {
        if self.next_cell_is_invalid() {
            None
        } else {
            Some(self.next_cell_unchecked())
        }
    }

    fn size(&self) -> Bytes {
        let data = unsafe { (self as *const CellHeader).offset(1) };
        let data = data as usize;

        let next = self.next_cell_unchecked();
        let next = next as usize;

        Bytes(next - data)
    }
}

An AllocatedCell is a CellHeader followed by data that is allocated.

#[repr(C)]
struct AllocatedCell {
    header: CellHeader,
}

A FreeCell is a CellHeader followed by data that is not in use, and from which we recycle a word for the next link in the free list.

#[repr(C)]
struct FreeCell {
    header: CellHeader,
    next_free_raw: *mut FreeCell,
}

Each of AllocatedCell and FreeCell have methods that make sense only when the cell is allocated or free, respectively, and maintain the invariants required for cells of their state. For example, the method for transforming a FreeCell into an AllocatedCell ensures that the IS_ALLOCATED bit gets set, and the method for transforming an AllocatedCell into a FreeCell unsets that bit.

impl FreeCell {
    unsafe fn into_allocated_cell(&mut self) -> &mut AllocatedCell {
        self.header.set_allocated();
        mem::transmute(self)
    }
}

impl AllocatedCell {
    unsafe fn into_free_cell(&mut self) -> &mut FreeCell {
        self.header.set_free();
        mem::transmute(self)
    }
}

Implementing Allocation

Let’s begin by looking at first fit allocation without any refilling of the free list in the case where there are no available blocks of memory that can satisfy this allocation request. Given the head of a free list, we search for the first block that can fit the requested allocation. Upon finding a suitable block, we determine whether to split the block in two, or use it as is. If we don’t find a suitable block we return an error.

unsafe fn walk_free_list<F, T>(
    head: &mut *mut FreeCell,
    mut callback: F,
) -> Result<T, ()>
where
    F: FnMut(&mut *mut FreeCell, &mut FreeCell) -> Option<T>,
{
    let mut previous_free = head;

    loop {
        let current_free = *previous_free;

        if current_free.is_null() {
            return Err(());
        }

        let mut current_free = &mut *current_free;

        if let Some(result) = callback(previous_free, current_free) {
            return Ok(result);
        }

        previous_free = &mut current_free.next_free_raw;
    }
}

unsafe fn alloc_first_fit(
    size: Words,
    head: &mut *mut FreeCell,
    policy: &AllocPolicy,
) -> Result<*mut u8, ()> {
    walk_free_list(head, |previous, current| {
        // Check whether this cell is large enough to satisfy this allocation.
        if current.header.size() < size.into() {
            return None;
        }

        // The cell is large enough for this allocation -- maybe *too*
        // large. Try splitting it.
        if let Some(allocated) = current.split_alloc(previous, size, policy) {
            return Some(allocated.data());
        }

        // This cell has crazy Goldilocks levels of "just right". Use it as is,
        // without any splitting.
        *previous = current.next_free();
        let allocated = current.into_allocated_cell(policy);
        Some(allocated.data())
    })
}

Splitting a cell in two occurs when a cell has room for both the requested allocation and for another adjacent cell afterwards that is no smaller than some minimum block size. We use the &AllocPolicy trait object to configure this minimum block size, among other things, for different size classes without the code duplication that monomorphization creates. If there is room to split, then we insert the newly split cell into the free list, remove the current cell, and fixup the doubly-linked list of adjacent cells in the headers.

impl FreeCell {
    fn should_split_for(&self, alloc_size: Words, policy: &AllocPolicy) -> bool {
        let self_size = self.header.size();
        let min_cell_size: Bytes = policy.min_cell_size(alloc_size).into();
        let alloc_size: Bytes = alloc_size.into();
        self_size - alloc_size >= min_cell_size + size_of::<CellHeader>()
    }

    unsafe fn split_alloc(
        &mut self,
        previous: &mut *mut FreeCell,
        alloc_size: Words,
        policy: &AllocPolicy,
    ) -> Option<&mut AllocatedCell> {
        if self.should_split_for(alloc_size, policy) {
            let alloc_size: Bytes = alloc_size.into();

            let remainder = {
                let data = (&mut self.header as *mut CellHeader).offset(1) as *mut u8;
                data.offset(alloc_size.0 as isize)
            };

            let remainder = &mut *FreeCell::from_uninitialized(
                remainder,
                self.header.next_cell_raw,
                Some(&mut self.header),
                Some(self.next_free()),
                policy,
            );

            if let Some(next) = self.header.next_cell() {
                (*next).prev_cell_raw = &mut remainder.header;
            }
            self.header.next_cell_raw =
                ptr::NonNull::new_unchecked(&mut remainder.header);

            *previous = remainder;
            Some(self.into_allocated_cell(policy))
        } else {
            None
        }
    }
}

Refilling a free list when there is not a suitable block already in it is easy. For the main free list, we allocate new pages directly from the WebAssembly engine with the alloc_pages function we defined earlier. For a size class’s free list, we allocate a (relatively) large block from the main free list. This logic is encapsulated in the two different AllocPolicy implementations, and the AllocPolicy::new_cell_for_free_list method.

To allocate with a fallback to refill the free list, we do just that: attempt a first fit allocation, if that fails, refill the free list by pushing a new cell onto its front, and then try a first fit allocation once again.

unsafe fn alloc_with_refill(
    size: Words,
    head: &mut *mut FreeCell,
    policy: &AllocPolicy,
) -> Result<*mut u8, ()> {
    if let Ok(result) = alloc_first_fit(size, head, policy) {
        return Ok(result);
    }

    let cell = policy.new_cell_for_free_list(size)?;
    let head = (*cell).insert_into_free_list(head, policy);
    alloc_first_fit(size, head, policy)
}

But where do we get the free list heads from? The WeeAlloc structure holds the head of the main free list, and if size classes are enabled, the size classes’ free list heads.

pub struct WeeAlloc {
    head: imp::Exclusive<*mut FreeCell>,

    #[cfg(feature = "size_classes")]
    size_classes: SizeClasses,
}

struct SizeClasses(
    [imp::Exclusive<*mut FreeCell>; SizeClasses::NUM_SIZE_CLASSES],
);

impl SizeClasses {
    pub const NUM_SIZE_CLASSES: usize = 256;

    pub fn get(&self, size: Words) -> Option<&imp::Exclusive<*mut FreeCell>> {
        self.0.get(size.0 - 1)
    }
}

As you can see, every free list head is wrapped in an imp::Exclusive.

The imp module contains target-specific implementation code and comes in two flavors: imp_wasm32.rs and imp_unix.rs. The alloc_pages function we saw earlier is defined in imp_wasm32.rs. There is another alloc_pages function that uses mmap inside imp_unix.rs. The imp::Exclusive wrapper type guarantees exclusive access to its inner value. For WebAssembly, this is a no-op, since SharedArrayBuffers aren’t shipping and there is no shared-data threading. For unix systems, this protects the inner value in a pthread mutex, and is similar to std::sync::Mutex but provides a FnOnce interface rather than an RAII guard.
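
To make the shape of that wrapper concrete, here is a sketch of what the wasm32 flavor could look like. The field layout and trait bounds here are assumptions based on the description above, not the crate’s actual imp_wasm32.rs; only the with_exclusive_access name comes from the code shown below.

use core::cell::UnsafeCell;

// Sketch: a no-op "exclusive access" wrapper for single-threaded wasm32.
// The unix flavor would guard the inner value with a pthread mutex instead.
pub struct Exclusive<T> {
    inner: UnsafeCell<T>,
}

// Assumption: with no shared-memory threading on this target, handing out
// exclusive access without any real locking is sound.
unsafe impl<T> Sync for Exclusive<T> {}

impl<T> Exclusive<T> {
    pub const fn new(inner: T) -> Self {
        Exclusive {
            inner: UnsafeCell::new(inner),
        }
    }

    // An FnOnce interface rather than an RAII guard, as described above.
    pub fn with_exclusive_access<F, U>(&self, f: F) -> U
    where
        F: FnOnce(&mut T) -> U,
    {
        unsafe { f(&mut *self.inner.get()) }
    }
}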

If size classes are not enabled, we always use the main free list head and the LargeAllocPolicy. If size classes are enabled, we try to get the appropriate size class’s free list head, and if that works, then we use the SizeClassAllocPolicy with it. If there is no size class for the requested allocation size, then we fall back to the main free list and the LargeAllocPolicy.

impl WeeAlloc {
    #[cfg(feature = "size_classes")]
    unsafe fn with_free_list_and_policy_for_size<F, T>(&self, size: Words, f: F) -> T
    where
        F: for<'a> FnOnce(&'a mut *mut FreeCell, &'a AllocPolicy) -> T,
    {
        if let Some(head) = self.size_classes.get(size) {
            let policy = size_classes::SizeClassAllocPolicy(&self.head);
            let policy = &policy as &AllocPolicy;
            head.with_exclusive_access(|head| f(head, policy))
        } else {
            let policy = &LARGE_ALLOC_POLICY as &AllocPolicy;
            self.head.with_exclusive_access(|head| f(head, policy))
        }
    }

    #[cfg(not(feature = "size_classes"))]
    unsafe fn with_free_list_and_policy_for_size<F, T>(&self, size: Words, f: F) -> T
    where
        F: for<'a> FnOnce(&'a mut *mut FreeCell, &'a AllocPolicy) -> T,
    {
        let policy = &LARGE_ALLOC_POLICY as &AllocPolicy;
        self.head.with_exclusive_access(|head| f(head, policy))
    }
}

Finally, all that is left is to tie everything together to implement the alloc method for the Alloc trait:

unsafe impl<'a> Alloc for &'a WeeAlloc {
    unsafe fn alloc(&mut self, layout: Layout) -> Result<*mut u8, AllocErr> {
        if layout.align() > mem::size_of::<usize>() {
            return Err(AllocErr::Unsupported {
                details: "wee_alloc cannot align to more than word alignment",
            });
        }

        let size = Bytes(layout.size());
        if size.0 == 0 {
            return Ok(0x1 as *mut u8);
        }

        let size: Words = size.round_up_to();
        self.with_free_list_and_policy_for_size(size, |head, policy| {
            alloc_with_refill(size, head, policy)
                .map_err(|()| AllocErr::Exhausted { request: layout })
        })
    }

    ...
}

Implementing Deallocation

Deallocation either merges the just-freed block with one of its adjacent neighbors, if they are also free, or it pushes the block onto the front of the free list.

If we are reinserting a block into a size class’s free list, however, it doesn’t make sense to merge blocks. Because these free lists are always servicing allocations of a single size, we would just end up re-splitting the merged block back exactly as it is split now. There is no benefit to splitting and merging and splitting again. Therefore, we have the AllocPolicy inform us whether merging is desirable or not.

First, let’s examine deallocation without the details of merging. We get the appropriate free list and allocation policy, and conjure up a reference to the AllocatedCell that sits just before the data being freed. Then (assuming we didn’t merge into another block that is already in the free list) we push the block onto the front of the free list.

unsafe impl<'a> Alloc for &'a WeeAlloc {
    ...

    unsafe fn dealloc(&mut self, ptr: *mut u8, layout: Layout) {
        let size = Bytes(layout.size());

        if size.0 == 0 || ptr.is_null() {
            return;
        }

        let size: Words = size.round_up_to();

        self.with_free_list_and_policy_for_size(size, |head, policy| {
            let cell = (ptr as *mut AllocatedCell).offset(-1);
            let cell = &mut *cell;

            let free = cell.into_free_cell(policy);

            if policy.should_merge_adjacent_free_cells() {
                ...
            }

            free.insert_into_free_list(head, policy);
        });
    }
}

When merging cells, the adjacent neighbor(s) we are merging into are also free, and therefore are already inside the free list. Because our free list is singly-linked, rather than doubly-linked, we can’t arbitrarily splice in new elements when we have a handle to an element that is already in the free list. This causes some hiccups.

Merging with the previous adjacent cell is still easy: it is already in the free list, and we aren’t changing the location of the CellHeader, so folding this cell into it is all that needs to be done. The free list can be left alone.

if policy.should_merge_adjacent_free_cells() {
    if let Some(prev) = free.header
        .prev_cell()
        .and_then(|p| (*p).as_free_cell_mut())
    {
        prev.header.next_cell_raw = free.header.next_cell_raw;
        if let Some(next) = free.header.next_cell() {
            (*next).prev_cell_raw = &mut prev.header;
        }
        return;
    }

    ...
}

Merging with the next adjacent cell is a little harder. It is already in the free list, but we need to splice it out from the free list, since its header will become invalid after consolidation, and it is this cell’s header that needs to be in the free list. But, because the free list is singly-linked, we don’t have access to the pointer pointing to the soon-to-be-invalid header, and therefore can’t update that pointer to point to the new cell header. So instead we have a delayed consolidation scheme. We insert this cell just after the next adjacent cell in the free list, and set the next adjacent cell’s NEXT_FREE_CELL_CAN_MERGE bit.

impl FreeCell {
    const NEXT_FREE_CELL_CAN_MERGE: usize = 0b01;
    const _RESERVED: usize = 0b10;
    const MASK: usize = !0b11;

    fn next_free_can_merge(&self) -> bool {
        self.next_free_raw as usize & Self::NEXT_FREE_CELL_CAN_MERGE != 0
    }

    fn set_next_free_can_merge(&mut self) {
        let next_free = self.next_free_raw as usize;
        let next_free = next_free | Self::NEXT_FREE_CELL_CAN_MERGE;
        self.next_free_raw = next_free as *mut FreeCell;
    }

    fn next_free(&self) -> *mut FreeCell {
        let next_free = self.next_free_raw as usize & Self::MASK;
        next_free as *mut FreeCell
    }
}

unsafe impl<'a> Alloc for &'a WeeAlloc {
    ...

    unsafe fn dealloc(&mut self, ptr: *mut u8, layout: Layout) {
        ...

        if policy.should_merge_adjacent_free_cells() {
            ...

            if let Some(next) = free.header
                .next_cell()
                .and_then(|n| (*n).as_free_cell_mut())
            {
                free.next_free_raw = next.next_free();
                next.next_free_raw = free;
                next.set_next_free_can_merge();
                return;
            }
        }

        ...
    }
}

Then, the next time that we walk the free list for allocation, the bit will be checked and the consolidation will happen at that time. This means that the walk_free_list definition we showed earlier was incomplete, since it didn’t include the code for consolidation. Here is its complete definition:

unsafe fn walk_free_list<F, T>(
    head: &mut *mut FreeCell,
    policy: &AllocPolicy,
    mut f: F,
) -> Result<T, ()>
where
    F: FnMut(&mut *mut FreeCell, &mut FreeCell) -> Option<T>,
{
    let mut previous_free = head;

    loop {
        let current_free = *previous_free;

        if current_free.is_null() {
            return Err(());
        }

        let mut current_free = &mut *current_free;

        if policy.should_merge_adjacent_free_cells() {
            // Now check if this cell can merge with the next cell in the free
            // list. We do this after the initial allocation attempt so that we
            // don't merge, only to immediately split the cell again right
            // afterwards.
            while current_free.next_free_can_merge() {
                let prev_adjacent = current_free.header.prev_cell_raw as *mut FreeCell;
                let prev_adjacent = &mut *prev_adjacent;

                prev_adjacent.header.next_cell_raw = current_free.header.next_cell_raw;
                if let Some(next) = current_free.header.next_cell() {
                    (*next).prev_cell_raw = &mut prev_adjacent.header;
                }

                *previous_free = prev_adjacent;
                current_free = prev_adjacent;
            }
        }

        if let Some(result) = f(previous_free, current_free) {
            return Ok(result);
        }

        previous_free = &mut current_free.next_free_raw;
    }
}

On the other hand, if both the previous and next adjacent cells are free, we are faced with a dilemma. We cannot merge all the previous, current, and next cells together because our singly-linked free list doesn’t allow for that kind of arbitrary appending and splicing in O(1) time. Instead, we use a heuristic to choose whether to merge with the previous or next adjacent cell. We could choose to merge with whichever neighbor cell is smaller or larger, but we don’t. Right now, we prefer the previous adjacent cell because we can greedily consolidate with it immediately, whereas consolidating with the next adjacent cell must be delayed, as explained above.

If we made the minimum allocation size two words, then we would have room for a doubly-linked free list, and could support consolidating previous, current, and next free cell neighbors. We could also remove the delayed consolidation scheme, which would further simplify a bunch of code. But it would mean effectively three words of overhead for single-word heap allocations. It isn’t clear to me whether that trade-off is worth it. To make an informed decision, we’d need a corpus of allocations and frees made by typical WebAssembly applications.
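
For comparison, here is a sketch (not actual wee_alloc code) of what a doubly-linked free cell could look like. The extra prev_free_raw pointer is what would let us splice any cell out of the free list in O(1), enabling eager merging with both neighbors and removing the delayed consolidation scheme, but reserving room for both free-list pointers in the freed data space is exactly what forces the two-word minimum allocation size discussed above:

// Stub standing in for wee_alloc's real CellHeader, which tracks the
// previous and next *adjacent* cells; included only to keep the sketch
// self-contained.
struct CellHeader {
    prev_cell_raw: *mut CellHeader,
    next_cell_raw: *mut CellHeader,
}

// A hypothetical doubly-linked free cell. The two free-list pointers
// live in what was the allocation's data space while the cell is free,
// so a one-word allocation would still pay for a header plus two words
// of reserved data space: effectively three words of overhead.
struct DoublyLinkedFreeCell {
    header: CellHeader,
    next_free_raw: *mut DoublyLinkedFreeCell,
    prev_free_raw: *mut DoublyLinkedFreeCell,
}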

Conclusion

wee_alloc is a work-in-progress prototype, but it already meets its goal of being small.

However, it is certainly lacking in other areas. Sergey Pepyakin tried bootstrapping rustc with wee_alloc enabled as the global allocator, and it is a couple orders of magnitude slower than bootstrapping rustc with its default jemalloc. On the one hand, bootstrapping rustc isn’t exactly the scenario wee_alloc was designed for, and it isn’t surprising that this new, unoptimized prototype is slower than the mature, production-grade jemalloc. Furthermore, I doubt that we can compete with jemalloc on speed without regressing code size. But even so, I think wee_alloc is slower than it should be, and I suspect that there are some low-hanging fruit waiting to be plucked.

I would love, love, love some help building wee_alloc — it’s a lot of fun! Are you interested in code golfing the smallest .wasm binaries? Do you like nitty-gritty profiling and digging into performance? Is wee_alloc missing some obviously-better technique that you’re familiar with? Want to help Rust be the number one choice for compiling to WebAssembly? Fork the wee_alloc repository on GitHub and then come join us in #rust-wasm on irc.mozilla.org and introduce yourself!

Thanks to Mike Cooper and David Keeler for reading early drafts and providing valuable feedback.


0 This will be unnecessary soon, since LLVM’s linker, lld, is gaining support for WebAssembly, and already supports this functionality. The Rust toolchain will also start using lld and then we won’t need wasm-gc anymore.

1 Right now there is a single linear memory, but it is expected that more address spaces will come. They would be useful for, for example, referencing garbage-collected JavaScript or DOM objects directly from WebAssembly code.

2 The current_memory and grow_memory instructions will likely be renamed to mem.size and mem.grow.

Ryan HarterAsking Questions

Will posted a great article a couple weeks ago, Giving and Receiving Help at Mozilla. I have been meaning to write a similar article for a while now. His post finally pushed me over the edge.

Be sure to read Will's post first. The rest of this article is an addendum to his post.

Avoid Context Free Pings

Context-free pings should be considered harmful. These are pings like ping or hey. The problems with context-free pings are documented elsewhere (1, 2, 3), so I won't discuss them here.

Pings are Ephemeral

IRC and Slack are nice because they generate notifications. If you need a quick response, IRC or Slack are the way to go. I get Slack and IRC notifications on my phone, so I'm likely to respond quickly. On the other hand, these notifications disappear easily, which makes it easy for me to lose your message. If you don't hear from me immediately, it's a good idea to send an email.

Otherwise, I don't mind pings at all. Some folks worry about creating interruptions, but this isn't a problem for me. I limit the notifications I get so if I don't want to get your notification, I won't. If I'm looking at Slack, I'm already distracted.

In short, consider these rules of thumb:

  • If it will take me less than 2m to respond to you and it's urgent, ping me
  • If it will take me more than 2m to respond to you and it's urgent, file a bug and ping me
  • If it's not urgent just email me

Prefer Open Channels

I've spent a lot of time on documentation at Mozilla. It's hard. Our tools are constantly under development and our needs are always changing so our documentation needs constant work. Asking questions in the open reduces our documentation burden.

Email is where information goes to die. If we discuss a problem in a bug, that conversation is open and discoverable. It's not always useful, but it's a huge win when it is. File a bug instead of writing an email. @mention me in #fx-metrics instead of PM-ing me. CC an open mailing list if you need to use email.

Andy McKayAlternatives to vertical tabs

For the longest time I've used vertical tabs in Firefox, and I still find it odd that people don't use them more. It's a simple fact that a horizontal tab strip doesn't scale well when you get lots of tabs.

Of course, for most users this isn't a problem, most users do not have a lot of tabs open according to Firefox telemetry:

But if you have quite a few, the titles just get squished and squished until the tab strip becomes a scroll bar. Vertical tabs are great for this, giving you lots of title space on a wide monitor:

Firefox is way better at this than Chrome. I sat next to someone who had nothing but triangles as their tab bar in Chrome. How did they cope?

With the landing of the tab hiding API in WebExtensions in Firefox 59, I wanted to try and understand what the many people who were clamouring for this API wanted to do. So I wrote a quick extension that's pretty terrible. It provided a "Hide this tab" context menu item on the tab to hide the tab. I then added a quick management page to list all the hidden pages.

That was ok, but clicking that menu item was tedious. So then I set it to just perform some actions for me. I've now got it set up to hide a tab if it hasn't been looked at for an hour. Then five hours after that, if I haven't opened it again, the extension just closes the tab.

I tried that for a week and found it pretty useful. Tabs that are hidden still show up in the awesome bar and as soon as I click on them, they come back instantly. Eventually they'll get closed. They'll still appear in the awesome bar and I can bring them back, just in a slower manner.

If I find myself saying "where was that tab..." I just go to the management view and it's likely there.

This extension isn't perfect, but it's enabled me to stop using vertical tabs most of the time, and now I'm torn over which workflow is better. Maybe some combination.

Niko MatsakisMaximally minimal specialization: always applicable impls

So aturon wrote this beautiful post about what a good week it has been. In there, they wrote:

Breakthrough #2: @nikomatsakis had a eureka moment and figured out a path to make specialization sound, while still supporting its most important use cases (blog post forthcoming!). Again, this suddenly puts specialization on the map for Rust Epoch 2018.

Sheesh I wish they hadn’t written that! Now the pressure is on. Well, here goes nothing =).

Anyway, I’ve been thinking about the upcoming Rust Epoch. We’ve been iterating over the final list of features to be included and I think it seems pretty exciting. But there is one “fancy type system” feature that’s been languishing for some time: specialization. Accepted to much fanfare as RFC 1210, we’ve been kind of stuck since then trying to figure out how to solve an underlying soundness challenge.

As aturon wrote, I think (and emphasis on think!) I may have a solution. I call it the always applicable rule, but you might also call it maximally minimal specialization1.

Let’s be clear: this proposal does not support all the specialization use cases originally envisioned. As the phrase maximally minimal suggests, it works by focusing on a core set of impls and accepting those. But that’s better than most of its competitors! =) Better still, it leaves a route for future expansion.

The soundness problem

I’ll just cover the soundness problem very briefly; Aaron wrote an excellent blog post that covers the details. The crux of the problem is that code generation wants to erase regions, but the type checker doesn’t. This means that we can write specialization impls that depend on details of lifetimes, but we have no way to test at code generation time if those more specialized impls apply. A very simple example would be something like this:

impl<T> Trait for T { }
impl Trait for &'static str { }

At code generation time, all we know is that we have a &str – for some lifetime. We don’t know if it’s a static lifetime or not. The type checker is supposed to have assured us that we don’t have to know – that this lifetime is “big enough” to cover all the uses of the string.

My proposal would reject the specializing impl above. I basically aim to solve this problem by guaranteeing that, just as today, code generation doesn’t have to care about specific lifetimes, because it knows that – whatever they are – if there is a potentially specializing impl, it will be applicable.

The “always applicable” test

The core idea is to change the rule for when overlap is allowed. In RFC 1210 the rule is something like this:

  • Distinct impls A and B are allowed to overlap if one of them specializes the other.

We have long intended to extend this via the idea of intersection impls, giving rise to a rule like:

  • Two distinct impls A and B are allowed to overlap if, for all types in their intersection:
    • there exists an applicable impl C and C specializes both A and B.2

My proposal is to extend that intersection rule with the always applicable test. I’m actually going to start with a simple version, and then I’ll discuss an important extension that makes it much more expressive.

  • Two distinct impls A and B are allowed to overlap if, for all types in their intersection:
    • there exists an applicable impl C and C specializes both A and B,
    • and that impl C is always applicable.

(We will see, by the way, that the precise definition of the specializes predicate doesn’t matter much for the purposes of my proposal here – any partial order will do.)

When is an impl always applicable?

Intuitively, an impl is always applicable if it does not impose any additional conditions on its input types beyond that they be well-formed – and in particular it doesn’t impose any equality constraints between parts of its input types. It also has to be fully generic with respect to the lifetimes involved.

Actually, I think the best way to explain it is in terms of the implied bounds proposal3 (RFC, blog post). The idea is roughly this: an impl is always applicable if it meets three conditions:

  • it relies only on implied bounds,
  • it is fully generic with respect to lifetimes,
  • it doesn’t repeat generic type parameters.

Let’s look at those three conditions.

Condition 1: Relies only on implied bounds.

Here is an example of an always applicable impl (which could therefore be used to specialize another impl):

struct Foo<T: Clone> { }

impl<T> SomeTrait for Foo<T> { 
  // code in here can assume that `T: Clone` because of implied bounds
}

Here the impl works fine, because it adds no additional bounds beyond the T: Clone that is implied by the struct declaration.

If the impl adds new bounds that are not part of the struct, however, then it is not always applicable:

struct Foo<T: Clone> { }

impl<T: Copy> SomeTrait for Foo<T> { 
  // ^^^^^^^ new bound not declared on `Foo`,
  //         hence *not* always applicable
}

Condition 2: Fully generic with respect to lifetimes.

Each lifetime used in the impl header must be a lifetime parameter, and each lifetime parameter can only be used once. So an impl like this is always applicable:

impl<'a, 'b> SomeTrait for &'a &'b u32 {
  // implied bounds let us assume that `'b: 'a`, as well
}

But the following impls are not always applicable:

impl<'a> SomeTrait for &'a &'a u32 {
                   //  ^^^^^^^ same lifetime used twice
}

impl SomeTrait for &'static str {
                //  ^^^^^^^ not a lifetime parameter
}

Condition 3: Each type parameter can only be used once.

Using a type parameter more than once imposes “hidden” equality constraints between parts of the input types which in turn can lead to equality constraints between lifetimes. Therefore, an always applicable impl must use each type parameter only once, like this:

impl<T, U> SomeTrait for (T, U) {
}

Repeating, as here, means the impl cannot be used to specialize:

impl<T> SomeTrait for (T, T) {
  //                   ^^^^
  // `T` used twice: not always applicable
}

How can we think about this formally?

For each impl, we can create a Chalk goal that is provable if it is always applicable. I’ll define this here “by example”. Let’s consider a variant of the first example we saw:

struct Foo<T: Clone> { }

impl<T: Clone> SomeTrait for Foo<T> { 
}

As we saw before, this impl is always applicable, because the T: Clone where clause on the impl follows from the implied bounds of Foo<T>.

The recipe to transform this into a predicate is that we want to replace each use of a type/region parameter in the input types with a universally quantified type/region (note that the two uses of the same type parameter would be replaced with two distinct types). This yields a “skolemized” set of input types T. We then check whether the impl could be applied to T.

In the case of our example, that means we would be trying to prove something like this:

// For each *use* of a type parameter or region in
// the input types, we add a 'forall' variable here.
// In this example, the only spot is `Foo<_>`, so we
// have one:
forall<A> {
  // We can assume that each of the input types (using those
  // forall variables) are well-formed:
  if (WellFormed(Foo<A>)) {
    // Now we have to see if the impl matches. To start,
    // we create existential variables for each of the
    // impl's generic parameters:
    exists<T> {
      // The types in the impl header must be equal...
      Foo<T> = Foo<A>,
      // ...and the where clauses on the impl must be provable.
      T: Clone,
    }
  }
} 

Clearly, this is provable: we infer that T = A, and then we can prove that A: Clone because it follows from WellFormed(Foo<A>). Now if we look at the second example, which added T: Copy to the impl, we can see why we get an error. Here was the example:

struct Foo<T: Clone> { }

impl<T: Copy> SomeTrait for Foo<T> { 
  // ^^^^^^^ new bound not declared on `Foo`,
  //         hence *not* always applicable
}

That example results in a query like:

forall<A> {
  if (WellFormed(Foo<A>)) {
    exists<T> {
      Foo<T> = Foo<A>,
      T: Copy, // <-- Not provable! 
    }
  }
} 

In this case, we fail to prove T: Copy, because it does not follow from WellFormed(Foo<A>).

As one last example, let’s look at the impl that repeats a type parameter:

impl<T> SomeTrait for (T, T) {
  // Not always applicable
}

The query that will result follows; what is interesting here is that the type (T, T) results in two forall variables, because it has two distinct uses of a type parameter (it just happens to be one parameter used twice):

forall<A, B> {
  if (WellFormed((A, B))) {
    exists<T> {
      (T, T) = (A, B) // <-- cannot be proven
    }
  }
} 

What is accepted?

What this rule primarily does is allow you to specialize blanket impls with concrete types. For example, we currently have a From impl that says any type T can be converted to itself:

impl<T> From<T> for T { .. }

It would be nice to be able to define an impl that allows a value of the never type ! to be converted into any type (since such a value cannot exist in practice):

impl<T> From<!> for T { .. }

However, this impl overlaps with the reflexive impl. Therefore, we’d like to be able to provide an intersection impl defining what happens when you convert ! to ! specifically:

impl From<!> for ! { .. }

All of these impls would be legal in this proposal.
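
As a sanity check, we can run the intersection impl impl From<!> for ! through the recipe from the previous section. It uses no type or lifetime parameters in its input types, so there is nothing to generalize and nothing extra to prove; the resulting query (sketched in the same style as the ones above) is trivially provable:

// No uses of type or lifetime parameters in `From<!> for !`, so there
// are no 'forall' variables and no 'exists' variables for impl generics:
if (WellFormed(!)) {
  // The impl header types trivially unify with themselves...
  ! = !
  // ...and there are no where clauses on the impl to prove.
}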

Extension: Refining always applicable impls to consider the base impl

While it accepts some things, the always applicable rule can also be quite restrictive. For example, consider this pair of impls:

// Base impl:
impl<T> SomeTrait for T where T: 'static { }
// Specializing impl:
impl SomeTrait for &'static str { }

Here, the second impl wants to specialize the first, but it is not always applicable, because it specifies the 'static lifetime. And yet, it feels like this should be ok, since the base impl only applies to 'static things.

We can make this notion more formal by expanding the property to say that the specializing impl C must be always applicable with respect to the base impls. In this extended version of the predicate, the impl C is allowed to rely not only on the implied bounds, but on the bounds that appear in the base impl(s).

So, the impls above might result in a Chalk predicate like:

// One use of a lifetime in the specializing impl (`'static`),
// so we introduce one 'forall' lifetime:
forall<'a> {
  // Assuming the base impl applies:
  if (exists<T> { T = &'a str, T: 'static }) {
      // We have to prove that the
      // specialized impl's types can unify:
      &'a str = &'static str
  }
}

As it happens, the compiler today has logic that would let us deduce that, because we know that &'a str: 'static, then we know that 'a = 'static, and hence we could solve this clause successfully.

This rule also allows us to accept some cases where type parameters are repeated, though we’d have to upgrade chalk’s capability to let it prove those predicates fully. Consider this pair of impls from RFC 1210:

// Base impl:
impl<E, T> Extend<E, T> for Vec<E> where T: IntoIterator<Item=E> {..}
// Specializing impl:
impl<'a, E> Extend<E, &'a [E]> for Vec<E> {..}
               //  ^       ^           ^ E repeated three times!

Here the specializing impl repeats the type parameter E three times! However, looking at the base impl, we can see that all of those repeats follow from the conditions on the base impl. The resulting chalk predicate would be:

// The fully general form of specializing impl is
// > impl<A,'b,C,D> Extend<A, &'b [C]> for Vec<D>
forall<A, 'b, C, D> {
  // Assuming the base impl applies:
  if (exists<E, T> { E = A, T = &'b [C], Vec<D> = Vec<E>, T: IntoIterator<Item=E> }) {
    // Can we prove the specializing impl unifications?
    exists<'a, E> {
      E = A,
      &'a [E] = &'b [C],
      Vec<E> = Vec<D>,
    }
  }
} 

This predicate should be provable – but there is a definite catch. At the moment, these kinds of predicates fall outside the “Hereditary Harrop” (HH) predicates that Chalk can handle. HH predicates do not permit existential quantification and equality predicates as hypotheses (i.e., in an if (C) { ... }). I can however imagine some quick-n-dirty extensions that would cover these particular cases, and of course there are more powerful proving techniques out there that we could tinker with (though I might prefer to avoid that).

Extension: Reverse implied bounds rules

While the previous examples ought to be provable, there are some other cases that won’t work out without some further extension to Rust. Consider this pair of impls:

impl<T> Foo for T where T: Clone { }
impl<T> Foo for Vec<T> where T: Clone { }

Can we consider this second impl to be always applicable relative to the first? Effectively this boils down to asking whether knowing Vec<T>: Clone allows us to deduce that T: Clone – and right now, we can’t know that. The problem is that the impls we have only go one way. That is, given the following impl:

impl<T> Clone for Vec<T> where T: Clone { .. }

we get a program clause like

forall<T> {
  (Vec<T>: Clone) :- (T: Clone)
}

but we need the reverse:

forall<T> {
  (T: Clone) :- (Vec<T>: Clone)
}

This is basically an extension of implied bounds; but we’d have to be careful. If we just create those reverse rules for every impl, then it would mean that removing a bound from an impl is a breaking change, and that’d be a shame.

We could address this in a few ways. The most obvious is that we might permit people to annotate impls indicating that they represent minimal conditions (i.e., that removing a bound is a breaking change).

Alternatively, I feel like there is some sort of feature “waiting” out there that lets us make richer promises about what sorts of trait impls we might write in the future: this would be helpful also to coherence, since knowing what impls will not be written lets us permit more things in downstream crates. (For example, it’d be useful to know that Vec<T> will never be Copy.)

Extension: Designating traits as “specialization predicates”

However, even when we consider the base impl, and even if we have some solution to reverse rules, we still can’t cover the use case of having “overlapping blanket impls”, like these two:

impl<T> Skip for T where T: Read { .. }
impl<T> Skip for T where T: Read + Seek { .. }

Here we have a trait Skip that (presumably) lets us skip forward in a file. We can supply one default implementation that works for any reader, but it’s inefficient: it would just read and discard N bytes. It’d be nice if we could provide a more efficient version for those readers that implement Seek. Unfortunately, this second impl is not always applicable with respect to the first impl – it adds a new requirement, T: Seek, that does not follow from the bounds on the first impl nor the implied bounds.

You might wonder why this is problematic in the first place. The danger is that some other crate might have an impl for Seek that places lifetime constraints, such as:

impl Seek for &'static Foo { }

Now at code generation time, we won’t be able to tell if that impl applies, since we’ll have erased the precise region.

However, what we could do is allow the Seek trait to be designated as a specialization predicate (perhaps with an attribute like #[specialization_predicate]). Traits marked as specialization predicates would be limited so that every one of their impls must be always applicable (our original predicate). This basically means that, e.g., a “reader” cannot conditionally implement Seek – it has to be always seekable, or never. When determining whether an impl is always applicable, we can ignore where clauses that pertain to #[specialization_predicate] traits.

Adding a #[specialization_predicate] attribute to an existing trait would be a breaking change; removing it would be one too. However, it would be possible to take existing traits and add “specialization predicate” subtraits. For example, if the Seek trait already existed, we might do this:

impl<T> Skip for T where T: Read { .. }
impl<T> Skip for T where T: Read + UnconditionalSeek { .. }

#[specialization_predicate]
trait UnconditionalSeek: Seek {
  fn seek_predicate(&self, n: usize) {
    self.seek(n);
  }
}

Now streams that implement seek unconditionally (probably all of them) can add impl UnconditionalSeek for MyStream { } and get the optimization. Not as automatic as we might like, but could be worse.

Default impls need not be always applicable

This last example illustrates an interesting point. RFC 1210 described not only specialization but also a more flexible form of defaults that go beyond default methods in trait definitions. The idea was that you can define lots of defaults using a default impl. So the UnconditionalSeek trait at the end of the last section might also have been expressed:

#[specialization_predicate]
trait UnconditionalSeek: Seek {
}

default impl<T: Seek> UnconditionalSeek for T {
  fn seek_predicate(&self, n: usize) {
    self.seek(n);
  }
}

The interesting thing about default impls is that they are not (yet) full impls. They only represent default methods that real impls can draw upon, but users still have to write a real impl somewhere. This means that they can be exempt from the rules about being always applicable – those rules will be enforced at the real impl point. Note for example that the default impl above is not always applicable, as it depends on Seek, which is not an implied bound anywhere.

Conclusion

I’ve presented a refinement of specialization in which we impose one extra condition on the specializing impl: not only must it be a subset of the base impl(s) that it specializes, it must be always applicable, which means basically that if we are given a set of types T where we know:

  • the base impl was proven by the type checker to apply to T
  • the types T were proven by the type checker to be well-formed
  • and the specialized impl unifies with the lifetime-erased versions of T

then we know that the specialized impl applies.

The beauty of this approach compared with past approaches is that it preserves the existing role of the type checker and the code generator. As today in Rust, the type checker always knows the full region details, but the code generator can just ignore them, and still be assured that all region data will be valid when it is accessed.

This implies for example that we don’t need to impose the restrictions that aturon discussed in their blog post: we can allow specialized associated types to be resolved in full by the type checker as long as they are not marked default, because there is no danger that the type checker and trans will come to different conclusions.

Thoughts?

I’ve opened an internals thread on this post. I’d love to hear whether you see a problem with this approach. I’d also like to hear about use cases that you have for specialization that you think may not fit into this approach.

Footnotes

  1. We don’t say it so much anymore, but in the olden days of Rust, the phrase “max min” was very “en vogue”; I think we picked it up from some ES6 proposals about the class syntax.

  2. Note: an impl is said to specialize itself.

  3. Let me give a shout out here to scalexm, who recently emerged with an elegant solution for how to model implied bounds in Chalk.

Air MozillaBay Area Rust Meetup February 2018

Bay Area Rust Meetup February 2018  - Matthew Fornaciari from Gremlin talking about a version of Chaos Monkey in Rust - George Morgan from Flipper talking about their embedded Rust...

Karl DubostPour holy Web Compatibility in your CSS font

Yet another webcompat issue with characters being cut off at the bottom. This one joins the others, such as cross characters not well centered in a rounded box, and many more. What is it about?

The sans-serif issue

All of these have the same pattern. They rely on the intrinsic font features to get the right design. So… this morning brought another one of these cases. Take this very simple CSS rule:

.gsc-control-cse, .gsc-control-cse .gsc-table-result {
    width: 100%;
    font-family: Arial, sans-serif;
    font-size: 13px;
}

Nothing fancy about it. It includes Arial, a widely used font and it gives a sans-serif fallback. It seems to be a sound and fail-safe choice.

Well… meet the land of mobile, where your font declaration doesn't seem so reliable anymore. Mobile browsers ship different default fonts on Android.

The sans-serif keyword doesn't mean the same thing in all browsers, even on the same OS.

For example, for sans-serif and western languages

  • Chrome: Roboto
  • Firefox: Clear Sans

If you use Chinese or Japanese characters, the default will be different.

Fix The Users Woes On Mobile

Why does this happen so often? Same story: the web developers didn't have the time or budget to test on all browsers. They probably tested on Chrome and Safari (iOS) and decided to take a pass on Firefox Android. And because fonts have different features, they do not behave the same with line heights, box sizes and so on. Clear Sans and Roboto are different enough that this creates breakage on some sites.

If you test only on Chrome Android (you should not), but let's say we have reached the shores of Friday… and it's time to deploy at 5pm. This is your fix:

.gsc-control-cse, .gsc-control-cse .gsc-table-result {
    width: 100%;
    font-family: Arial, Roboto, sans-serif;
    font-size: 13px;
}

Name the fonts available on the mobile OSes you expect the design to work on. It's still not universal and will not be reliable in all cases, but it will cover a lot of them. It will also make your Firefox Android users less grumpy, and your Mondays will be brighter.

Otsukare!

Hacks.Mozilla.OrgCreating an Add-on for the Project Things Gateway

The Project Things Gateway exists as a platform to bring all of your IoT devices together under a unified umbrella, using a standardized HTTP-based API. Currently, the platform only has support for a limited number of devices, and we need your help expanding our reach! It is fairly straightforward to add support for new devices, and we will walk you through how to do so. The best part: you can use whatever programming language you’d like!

High-Level Concepts

Add-on

An Add-on is a collection of code that the Gateway runs to gain new features, usually a new adapter. This is loosely modeled after the add-on system in Firefox, where each add-on adds to the functionality of your Gateway in new and exciting ways.

Adapter

An Adapter is an object that manages communication with a device or set of devices. This could be very granular, such as one adapter object communicating with one GPIO pin, or it could be much more broad, such as one adapter communicating with any number of devices over WiFi. You decide!

Device

A Device is just that, a hardware device, such as a smart plug, light bulb, or temperature sensor.

Property

A Property is an individual property of a device, such as its on/off state, its energy usage, or its color.

Supported Languages

Add-ons have been written in Node.js, Python, and Rust so far, and official JavaScript and Python bindings are available on the gateway platform. If you want to skip ahead, you can check out the list of examples now. However, you are free to develop an add-on in whatever language you choose, provided the following:

  • Your add-on is properly packaged.
  • Your add-on package bundles all required dependencies that do not already exist on the gateway platform.
  • If your package contains any compiled binaries, they must be compiled for the armv6l architecture. All Raspberry Pi families are compatible with this architecture. The easiest way to do this would be to build your package on a Raspberry Pi 1/2/Zero.

Implementation: The Nitty Gritty

Evaluate Your Target Device

First, you need to think about the device(s) you’re trying to target.

  • Will your add-on be communicating with one or many devices?
  • How will the add-on communicate with the device(s)? Is a separate hardware dongle required?
    • For example, the Zigbee and Z-Wave adapters require a separate USB dongle to communicate with devices.
  • What properties do these devices have?
  • Is there an existing Thing type that you can advertise?
  • Are there existing libraries you can use to talk to your device?
    • You’d be surprised by how many NPM modules, Python modules, C/C++ libraries, etc. exist for communicating with IoT devices.

The key here is to gain a strong understanding of the devices you’re trying to support.

Start from an Example

The easiest way to start development is to start with one of the existing add-ons (listed further down). You can download, copy and paste, or git clone one of them into:

/home/pi/mozilla-iot/gateway/build/addons/

Alternatively, you can do your development on a different machine. Just make sure you test on the Raspberry Pi.

After doing so, you should edit the package.json file as appropriate. In particular, the name field needs to match the name of the directory you just created.

Next, begin to edit the code. The key parts of the add-on lifecycle are device creation and property updates. Device creation typically happens as part of a discovery process, whether that’s through SSDP, probing serial devices, or something else. After discovering devices, you need to build up their property lists, and make sure you handle property changes (that could be through events you get, or you may have to poll your devices). You also need to handle property updates from the user.

Restart the gateway process to test your changes:

$ sudo systemctl restart mozilla-iot-gateway.service

Test your add-on thoroughly. You can enable it through the Settings->Add-ons menu in the UI.

Get Your Add-on Published!

Run ./package.sh or whatever else you have to do to package up your add-on. Host the package somewhere, e.g. on GitHub as a release. Then, submit a pull request or issue to the addon-list repository.

Notes

  • Your add-on will run in a separate process and communicate with the gateway process via nanomsg IPC. That should hopefully be irrelevant to you.
  • If your add-on process dies, it will automatically be restarted.

Examples

The Project Things team has built several add-ons that can serve as a good starting point and reference.

Node.js:

Python:

Rust:

References

Additional documentation, API references, etc., can be found here:

Find a bug in some of our software? Let us know! We’d love to have issues, or better yet, pull requests, filed to the appropriate Github repo.

Shing LyuMinimal React.js Without A Build Step (Updated)

Back in 2016, I wrote a post about how to write a React.js page without a build step. If I remember correctly, at that time the official React.js site had very little information about running React.js without Webpack, the in-browser Babel transpiler was not very stable, and they were deprecating JSXTransformer.js. After that post my focus turned to browser backend projects and I hadn’t touched React.js for a while. Now, 1.5 years later, when I tried to update one of my React.js projects, I noticed that the official site has clearer instructions on how to use React.js without a build step. So I’m going to write an update to that post here.

You can find the example code on GitHub.

1. Load React.js from CDN instead of npm

You can use the official minimal HTML template here. The most crucial bit is the importing of scripts:

<script src="https://unpkg.com/react@16/umd/react.development.js"></script>
<script src="https://unpkg.com/react-dom@16/umd/react-dom.development.js"></script>
<script src="https://unpkg.com/babel-standalone@6.15.0/babel.min.js"></script>

If you want better error messages, you might want to add the crossorigin attribute to the <script> tags, as suggested in the official documentation. Why this attribute, you ask? As described on MDN, this attribute allows your page to log errors from CORS scripts loaded from the CDN.

If you are looking for better performance, load the *.production.min.js instead of *.development.js.

2. Get rid of JSX

I’m actually not that against JSX now, but if you don’t want to include the babel.min.js script, you can consider using the React.createElement function. All JSX elements are actually syntactic sugar for calling React.createElement(). Here are some examples:

<h1>Hello World</h1>

can be written as

React.createElement('h1', null, 'Hello World')

And if you want to pass attributes around, you can do

<div onClick={this.props.clickHandler} data={this.state.data}>
  Click Me!
</div>
React.createElement('div', {
                      'onClick': this.props.clickHandler, 
                      'data': this.state.data
                    }, 
                    'Click Me!')

Of course you can have nested elements:

<div>
  <h1>Hello World</h1>
  <a>Click Me!</a>
</div>
React.createElement('div', null, 
  React.createElement('h1', null, 'Hello World'),
  React.createElement('a', null, 'Click Me!')
)

You can read how this works in the official documentation.

3. Split the React.js code into separate files

In the official HTML template, they show how to write script directly in HTML like:

<html>
  <body>
    <div id="root"></div>
    <script type="text/babel">

      ReactDOM.render(
        <h1>Hello, world!</h1>,
        document.getElementById('root')
      );

    </script>
  </body>
</html>

But for real-world projects we usually don’t want to throw everything into one big HTML file. So you can put everything between <script> and </script> into a separate JavaScript file, let’s name it app.js, and load it in the original HTML like so:

<html>
  <body>
    <div id="root"></div>
    <script src="app.js" type="text/babel"></script>
  </body>
</html>

The pitfall here is that you must keep the type="text/babel" attribute if you want to use JSX. Otherwise the script will fail when it first reaches a JSX tag, resulting in an error like this:

SyntaxError: expected expression, got '<'[Learn More]        app.js:2:2

Using 3rd-party NPM components

Modules with browser support

You can find tons of ready-made React components on NPM, but the quality varies. Some of them are released with browser support, for example Reactstrap, which contains Bootstrap 4 components wrapped in React. In its documentation you can see a “CDN” section with a CDN link, which should just work by adding it to a script tag:

<!-- react-transition-group is required by reactstrap -->
<script src="https://unpkg.com/react-transition-group@2.2.1/dist/react-transition-group.min.js" charset="utf-8"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/reactstrap/4.8.0/reactstrap.min.js" charset="utf-8"></script>

then you can find the components in a global variable Reactstrap:

<script type="text/babel" charset="utf-8">
  // "Import" the components from Reactstrap
  const {Button} = Reactstrap;

  // Render a Reactstrap Button element onto root
  ReactDOM.render(
    <Button color="danger">Hello, world!</Button>,
    document.getElementById('root')
  );
</script>

(In case you are curious, the first line is the destructuring assignment of objects in JavaScript.)

Of course it also works without JSX:

<script type="text/javascript" charset="utf-8">
  // "Import" the components from Reactstrap
  const {Button} = Reactstrap;

  // Render a Reactstrap Button element onto root
  ReactDOM.render(
    React.createElement(Button, {'color': 'danger'}, "Hello world!"),
    document.getElementById('root'),
  );
</script>

Modules without browser support

For modules without explicit browser support, you can still try to expose them to the browser with Browserify, as described in this post. Browserify is a tool that converts a Node.js module into something a browser can consume. There are two tricks here:

  1. Use the --standalone option so Browserify will expose the component under the window namespace, so you don’t need a module system to use it.
  2. Use the browserify-global-shim plugin to strip all the usage of React and ReactDOM in the NPM module code, so it will use the React and ReactDOM we included using the <script> tags.

I’ll use a very simple React component on NPM, simple-react-modal, to illustrate this. First, we download this module to see what it looks like:

npm install simple-react-modal

If we go to node_modules/simple-react-modal, we can see a pre-built JavaScript package in the dist folder. Now we can install Browserify with npm install -g browserify. But we can’t just run it yet, because the code uses require('react'), while we want to use our version loaded in the browser. So we need to run npm install browserify-global-shim and add the configuration to package.json:

// package.json
"browserify-global-shim": {
  "react": "React",
  "react-dom": "ReactDOM"
}

Now we can run

browserify node_modules/simple-react-modal \
  -o simple-react-modal-browser.js \
  --transform browserify-global-shim \
  --standalone Modal

We’ll get a simple-react-modal-browser.js file, which we can just load in the browser using the <script> tag. Then you can use the Modal like so:

<script type="text/javascript" charset="utf-8">
  // "Import" the components from Reactstrap
  const Modal = window.Modal.default;

  // Render a Modal element onto root
  ReactDOM.render(
    React.createElement(Modal, 
      { 
        'show': true,
        'closeOnOuterClick': true 
      }, 
      React.createElement("h1", null, "Hello")
    ),
    document.getElementById('root')
  );
</script>

(There are some implementation detail about the simple-react-modal module in the above code, so don’t be worried if you don’t get everything.)

The benefits

Using this method, you can start prototyping by simply copying an HTML file. You don’t need to install Node.js, NPM, and all the NPM modules that quickly bloat your small proof-of-concept page.

Secondly, this method is compatible with the React DevTools, which are available in both Firefox and Chrome, so debugging is much easier.

Finally, it’s super easy to deploy the program. Simply drop the files onto any web server (or use GitHub Pages). The server doesn’t even need to run Node and NPM; any plain HTTP server will be sufficient. Other people can also easily download the HTML file and start hacking. This is a very nice way to rapidly prototype complex UIs without spending an extra hour setting up all the build steps (and maybe wasting another two hours helping the team set up their environments).

Air MozillaReps Weekly Meeting, 08 Feb 2018

Reps Weekly Meeting This is a weekly call with some of the Reps to discuss all matters about/affecting Reps and invite Reps to share their work with everyone.

Mozilla Localization (L10N)L10N Report: February Edition

Welcome!

New localizers

  • Kumar has recently joined us to localize in Angika. Welcome Kumar!
  • Francesca has joined Pontoon to localize Firefox in Friulan. Do you speak the language? Join her!

Are you a locale leader and want us to include new members in our upcoming reports? Contact us!

New community/locales added

  • Mixteco Yucuhiti (“meh”) locale was recently added to our l10n repositories and will soon have single-locale builds to test Firefox Android on!
  • Angika (“anp”) locale was added to Pontoon and will soon start to localize Focus for Android. Welcome!
  • Friulan (“fur”) has been enabled in Pontoon to localize Firefox, starting from old translations recovered from Pootle.

New content and projects

What’s new or coming up in Firefox desktop

Migration to FTL (Fluent)

In the past releases we reached a few small but important milestones for the Fluent project:

  • Firefox 58 was released on January 23 with the first ever Fluent string.
  • Firefox 59, which will be released on March 13, has 4 more Fluent strings. For this milestone we focused on the migration tools we created to seamlessly port translations from the old format (.properties, .DTD) to Fluent.

For Firefox 60, currently in Nightly, we aim to migrate as many strings as possible to Fluent for Firefox Preferences. The process for these migrations is detailed in this email to dev-l10n, and there are currently 2 patches almost ready to land, while a larger one for the General pane is in progress.

While Pontoon’s documentation already had a section dedicated to Fluent, constantly updated as the interface evolves, our documentation now has a section dedicated to Fluent for localizers, explaining the basic syntax and some of the specific features available in Gecko.

Plural forms

We already talked about plurals in the December report. The good news is that strings using the wrong number of plural forms are now reported on the l10n dashboard (example). Here’s a summary of all you need to know about plurals.

How plurals work in .properties files
Plural forms in Firefox and Firefox for Android are obtained using a hack on top of .properties files (plural forms are separated by a semicolon). For example:

#1 tab has arrived from #2;#1 tabs have arrived from #2

English has 2 plural forms, one for singular, and one for all other numbers. The situation is much more complex for other languages, reaching up to 5 or 6 plural forms. In Russian the same string has 3 forms, each one separated from the other by a semicolon:

С #2 получена #1 вкладка;С #2 получено #1 вкладки;С #2 получено #1 вкладок

The semicolon is a separator, not a standard punctuation element:

  • You should evaluate and translate each sentence separately. Some locales start the second sentence lowercase because of the semicolon, or with a leading space. Both are errors.
  • You shouldn’t replace the semicolon with a character from your script, or another punctuation sign (commas, periods). Again, that’s not a punctuation sign, it’s a separator.

Edge cases
Sometimes English only has one form, because the string is used for cases where the number is always bigger than 1.

;Close #1 tabs

Note that this string still has two plural forms: the first form (used for case ‘1’, or singular in English) is empty. That’s why the string starts with a semicolon. If your locale only has 1 form, you should drop the leading semicolon.

In other cases, the variable is indicated only in the second form:

Close one tab;Close #1 tabs

If your locale only has 1 form, or uses the first form for more than ‘1’, use the second sentence as a reference for your translation.

There are also cases of “poor” plural forms, where the plural is actually used as a replacement for logic, like “1 vs many”. These are bugs, and should be fixed. For example, this string was fixed in Firefox 59 (bug 658191).

Known limitations
Plural forms in Gecko are supported only in .properties files and JavaScript code (not C++).

What about devtools?
If your locale has more plural forms than English, and you’re copying and pasting English into DevTools strings, the l10n dashboard will show warnings.

You can ignore them, as there’s no way to exclude locales from DevTools, or fix them by creating the expected number of plural forms by copying the English text as many times as needed.

Future of plurals
With Fluent, plurals become much more flexible, allowing locales to create special cases beyond the number of forms expected for their language.

What’s new or coming up in mobile

You might have noticed that Focus (iOS/Android) has been on a hiatus since mid-December 2017. That’s because the small mobile team is focusing on Firefox for Amazon Fire TV development at the moment!

We should be kicking things off again some time in mid-February. A firm date is not confirmed yet, but stay tuned on our dev-l10n mailing list for an upcoming announcement!

In the meantime, this means we are not shipping new locales on Focus, and we won’t be generating screenshots until the schedule resumes.

For Firefox on Fire TV, we are still figuring out which locales are officially supported by Amazon, and we are going to set up the l10n repositories to open it up to Mozilla localizations. There should also be a language switcher in the works very soon.

Concerning the Firefox for iOS schedule, it’s almost time to kick off l10n work for v11! Specific dates will be announced shortly – but expect strings to arrive towards the end of the month. March 29 will be the expected release date.

On the Firefox for Android front, we’ve now released v58. With this new version we bring you two new locales: Nepali (ne-NP) and Bengali from Bangladesh (bn-BD)!

We’re also in the process of adding Tagalog (tl), Khmer (km) and Mixteco Yucuhiti (meh) locales to all-locales to start Fennec single-locale builds.

What’s new or coming up in web projects

  • Marketing:
    • Firefox email: The team in charge of the monthly project targeting 6 locales will start following the standard l10n process: the email team will use Bugzilla to communicate the initial requests, Pontoon will host the content, and the l10n-drivers will send the requests through the mailing list. Test emails will be sent for verification to those who worked on the project that month. The process change has been communicated to the impacted communities. Thanks for responding so well to the change.
    • Regional single-language requests will also follow the standard process, moving localization tasks from Google Docs to Pontoon. If you are pinged by marketing people for these requests through email or Bugzilla, please let the l10n-drivers know. We want to make Pontoon the source of truth: the tool for community collaboration, future localization reference, consistency of terminology, and tracking of contribution activity.
    • Mozilla.org has a slow start this year. Most updates have been cleanups and minor fixes. There have been discussions on redesigning the mozilla.org site so the entire site has a unified and modern look from one page to another. This challenges the current way of content delivery, which is at page level. More to share in the upcoming monthly reports.
  • AMO-Linter, a new project, has been enabled in Pontoon. This feature targets add-on developers. As soon as information on the feature, the release cycle, and the staging server is available, the AMO documentation and Pontoon will be updated accordingly. In the meantime, report bugs by filing an issue.
  • Firefox Marketplace will be officially shut down on March 30th. Email communication was sent in English. However, a banner with the announcement was placed on the product in the top 5 languages.

What’s new or coming up in Foundation projects

Our 2017 fundraising campaign just finished, but we’re already kicking off this year’s campaign.
One area we want to improve is our communication with donors, so starting in February we will send a monthly donor newsletter. This will help us better communicate how donations are put to use, and build a trust relationship with our supporters.
We will also start raising money much earlier. Our first fundraising email will be a fun one for Valentine’s Day.

A quick update on other localized campaigns:

  • The *Privacy not included website is being redesigned to remove the holiday references, and some product reviews might be added soon.
  • We expect to have some actions this spring around GDPR in Europe, but there is no concrete plan yet.
  • We’ve got some news on the Copyright reform — the JURI Committee will be tentatively voting on March 27th, so we will do some promotion of our call tool over the next few weeks.

The final countdown has started for the Internet Health Report! The second edition is on its way and should be published in March, this time again in English, German, French and Spanish.

What’s new or coming up in Pontoon

  • On February 3, Pontoon passed 3,000 registered users. Congratulations to Balazs Zubak for becoming the 3,000th registered user of Pontoon!
  • We’re privileged to have VishalCR7, karabellyj and maiquynhtruong join the Pontoon community of contributors recently. Stay tuned for more details about the work they are doing coming up soon in a blog post!

Friends of the Lion

Image by Elio Qoshi

Shout out to Adrien G, aka Alpha, for his continuous dedication to French localization on Pontoon and his great progress! He is now an official team member, and we’re happy to have him take on more responsibilities. Congrats!

Know someone in your l10n community who’s been doing a great job and should appear here? Contact one of the l10n-drivers and we’ll make sure they get a shout-out (see list at the bottom)!

Useful Links

Questions? Want to get involved?

Did you enjoy reading this report? Let us know how we can improve by reaching out to any one of the l10n-drivers listed above.

Hacks.Mozilla.OrgForging Better Tools for the Web

A Firefox DevTools Retrospective

2017 was a big year for Firefox DevTools. We updated and refined the UI, refactored three of the panels, squashed countless bugs, and shipped several new features. This work not only provides a faster and better DevTools experience, but lays the groundwork for some exciting new features and improvements for 2018 and beyond. We’re always striving to make tools and features that help developers build websites using the latest technologies and standards, including JavaScript frameworks and, of course, CSS Grid.

To better understand where we’re headed with Firefox Devtools, let’s take a quick look back.

2016

In 2016, the DevTools team kicked off an ambitious initiative to completely transition DevTools away from XUL and Firefox-specific APIs to modern web technologies. One of the first projects to emerge was debugger.html.

Debugger.html is not just an iteration of the old Firefox Debugger. The team threw everything out, created an empty repo, and set out to build a debugger from scratch that utilized reusable React components and a Redux store model.

The benefits of this modern architecture became obvious right away. Everything was more predictable, understandable, and testable. This approach also allows debugger.html to target more than just Firefox. It can target other platforms such as Chrome and Node.

We also shipped a new Responsive Design Mode in 2016 that was built using only modern web technologies.

2017

Last year, we continued to build on the work that was started in 2016 by building and rebuilding parts of Firefox DevTools (and adding new features along the way). As a result, our developer tools are faster and more reliable. We also launched Firefox Quantum, which focused on browser speed and performance.

Debugger

The debugger.html work that started in 2016 shipped to all channels with Firefox 56. We also added several new features and improvements, including better search tools, collapsed framework call-stacks, async stepping, and more.

Console

Just as with debugger.html, we shipped a brand-new Firefox console with Firefox Quantum. It has a new UI, and has been completely rewritten using React and Redux. This new console includes several new improvements such as the ability to collapse log groups, and the ability to inspect objects in context.

Network Monitor

We also shipped a new network monitor to all channels in Firefox 57. This new Network Monitor has a new UI, and is (you guessed it) built with modern web technologies such as React and Redux. It also has a more powerful filter UI, new Netmonitor columns, and more.

CSS Grid Layout Panel

Firefox 57 shipped with a new CSS Grid Layout Panel. CSS Grid is revolutionizing web design, and we wanted to equip designers and developers with powerful tools for building and inspecting CSS Grid layouts. You can read all about the panel features here; highlights include an overlay to visualize the grid, an interactive grid outline, displaying grid area names, and more.

Photon UI

We also did a complete visual refresh of the DevTools themes to coincide with the launch of Firefox Quantum and the new Photon UI. This refresh brings a design that is clean, slick, and easy to read.

2018 and Beyond

All of this work has set up an exciting future for Firefox DevTools. By utilizing modern web technologies, we can create, test, and deploy new features at a faster pace than when we were relying on XUL and Firefox-specific APIs.

So what’s next? Without giving too much away, here are just some of the areas we are focusing on:

Better Tools for Layouts and Design

This is 2018 and static designs made in a drawing program are being surpassed by more modern tools! Designing in the browser gives us the freedom to experiment, innovate, and build faster. Speaking with hundreds of developers over the past year, we’ve learned that there is a huge desire to bring better design tools to the browser.

We’ve been thrilled by overwhelmingly positive feedback around the CSS Grid Layout Panel and we’ve heard your requests for more tools that help design, build, and inspect layouts.

We are making a Firefox Inspector tool to make it easier to write Flexbox code. What do you want it to do the most? What’s the hardest part for you when struggling with Flexbox?
@jensimmons, 14 Nov 2017

I’m so pleased about this reaction to the Firefox Grid Inspector. That was the plan. We’ve just gotten started. More super-useful layout tools are coming in 2018.
@jensimmons, 24 Nov 2017

Better Tools for Frameworks

2017 was a banner year for JavaScript frameworks such as React and Vue. There are also older favorites such as Angular and Ember that continue to grow and improve. These frameworks are changing the way we build for the web, and we have ideas for how Firefox DevTools can better equip developers who work with frameworks.

An Even Better UI

The work on the Firefox DevTools UI will never be finished. We believe there is always room for improvement. We’ll continue to work with the Firefox Developer community to test and ship improvements.

New DevTools poll: Which of these three toolbar layouts do you prefer for the Network panel?
@violasong

More Projects on GitHub

We tried something new when we started building debugger.html. We decided to build the project in GitHub. Not only did we find a number of new contributors, but we received a lot of positive feedback about how easy it was to locate, manage, and work with the code. We will be looking for more opportunities to bring our projects to GitHub this year, so stay tuned.

Get Involved

Have an idea? Found a bug? Have a (gasp) complaint? We will be listening very closely to devtools users as we move into 2018 and we want to hear from you. Here are some of the ways you can join our community and get involved:

Join us on Slack

You can join our devtools.html Slack community. We also hang out on the #devtools channel on irc.mozilla.org.

Follow us on Twitter

We have an official account that you can follow, but you can also follow various team members who will occasionally share ideas and ask for feedback. Follow @FirefoxDevTools here.

Contribute

If you want to get your hands dirty, you can become a contributor:

List of open bugs
GitHub

Download Firefox Developer Edition

Firefox Developer Edition is built specifically for developers. It provides early access to all of the great new features we have planned for 2018.

Thank you to everyone who has contributed so far. Your tweets, bug reports, feedback, criticisms, and suggestions matter and mean the world to us. We hope you’ll join us in 2018 as we continue our work to build amazing tools for developers.

Wladimir PalantEasy Passwords is now PfP: Pain-free Passwords

With the important 2.0 milestone I decided to give my Easy Passwords project a more meaningful name. So now it is called PfP: Pain-free Passwords and even has its own website. And that’s the only thing most people will notice, because the most important changes in this release are well-hidden: the crypto powering the extension got an important upgrade. First of all, the PBKDF2 algorithm for generating passwords was dumped in favor of scrypt, which is more resistant to brute-force attacks. Also, all metadata written by PfP as well as backups are encrypted now, so that they won’t even leak information about the websites used. Both changes required much consideration and took a while to implement, but now I am way more confident about the crypto than I was back when Easy Passwords 1.0 was released. Finally, there is now an online version compiled from the same source code as the extensions, with mostly the same functionality (yes, usability isn’t great yet; the user interface wasn’t designed for this use case).
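For readers curious what an scrypt-based derivation looks like in practice, here is a minimal sketch in Node.js (10.5 or later). It is not PfP’s actual code: the salt scheme, the scrypt parameters and the output encoding are placeholders chosen for illustration.

// A minimal sketch, not PfP's actual code: salt scheme, parameters and
// output encoding below are made up for illustration.
const crypto = require('crypto');

// Derive a deterministic, site-specific password from a master password.
// Using the site name in the salt gives every site a different result.
function derivePassword(masterPassword, siteName, length = 16) {
  const salt = 'example-salt:' + siteName; // hypothetical salt scheme
  const key = crypto.scryptSync(masterPassword, salt, 32, {
    cost: 16384,        // N: CPU/memory cost; higher makes brute force slower
    blockSize: 8,       // r
    parallelization: 1  // p
  });
  // Encode the raw bytes into something typeable.
  return key.toString('base64').slice(0, length);
}

console.log(derivePassword('correct horse battery staple', 'example.com'));

The key property is that scrypt is deliberately memory-hard, which makes guessing master passwords at scale far more expensive than with PBKDF2.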

Now that the hard stuff is out of the way, what’s next? The plan for the next release is publishing PfP for Microsoft Edge (it’s working already but I need to figure out the packaging), adding sync functionality (all encrypted just like the backups, so that in theory any service where you can upload files could be used) and importing backups created with a different master password (important as a migration path when you change your master password). After that I want to look into creating an Android client as well as a Node-based command line interface. These new clients had to be pushed back because they are most useful with sync functionality available.

The Mozilla BlogAnnouncing the Reality Redrawn Challenge Winners!

I’m delighted to announce the winners of Mozilla’s Reality Redrawn Challenge after my fellow judges and I received entries from around the globe. Since we issued the challenge just two months ago we have been astonished by the quality and imagination behind proposals that use mixed reality and other media to make the power of misinformation and its potential impacts visible and visceral.

If you have tried to imagine the impact of fake news – even what it smells like – when it touches your world, I hope you will come to experience the Reality Redrawn exhibit at the Tech Museum of Innovation in San Jose. Our opening night runs from 5-9pm on May 17th and free tickets are available here. Keep an eye on Twitter @mozilla with the hashtag #RealityRedrawn for more details in the coming weeks. After opening night you can experience the exhibit in normal daily museum hours for a limited engagement of two weeks, 10am-5pm. We will also be looking for ways to bring the winning entries to life for those who are not in the Bay Area.

The winner of our grand prize of $15,000 is Yosun Chang from San Francisco with Bubble Chaos. Yosun has won many competitions including the Salesforce Dreamforce 2011 Hackathon, Microsoft Build 2016 Hackathon and TechCrunch Disrupt 2016 Hackathon. She will use augmented reality and virtual reality to create an experience that allows the user to interact with misinformation in a creative new way.

Yosun says of her entry: “We iPhoneX face track a user’s face to puppeteer their avatar, then bot and VR crowdsource lipreading that avatar to form political sides. This powers the visuals of a global macroscopic view showing thousands of nodes transmitting to create misinformation. We present also the visceral version where the user can try to “echo” their scented-colored bubble in a “bubble chamber” to make the room smell like their scent with multiple pivoting SensaBubble machines.”

Our second prize joint semi-finalist is Stu Campbell (aka Sutu) from Roeburne in Western Australia. Sutu will receive $7,500 to complete the creation of his entry FAKING NEWS. He is known for ‘Nawlz’, a 24 episode interactive cyberpunk comic book series created for web and iPad. In 2016 he was commissioned by Marvel and Google to create Tilt Brush Virtual Reality paintings. He was also the feature subject of the 2014 documentary, ‘Cyber Dreaming’.

As Sutu explains: “The front page of a newspaper will be reprinted in a large format and mounted to the museum wall. Visitors will also find physical copies of the paper in the museum space. Visitors will be encouraged to download our EyeJack Augmented Reality app and then hold their devices over the paper to see the story augment in real time. Small fake news bots will animate across the page, rearranging and deleting words and inserting news words and images. The audience then has the option to share the new augmented news to their own social media channels, thus perpetuating its reach.”

Mario Ezekiel Hernandez from Austin also receives $7,500 to complete his entry: Where You Stand. Mario graduated from Texas State University in 2017 with a degree in Applied Mathematics. He currently works as a data analyst and is a member of the interactive media arts collective, vûrv.

Mario’s entry uses TouchDesigner, Python, R, OpenCV, web cameras, projectors, and a mac mini. Mario says of his entry: “Our solution seeks to shine a light on the voices of policymakers and allow participants to freely explore the content that is being promoted by their legislative representatives. The piece dynamically reacts to actor locations. As they move along the length of the piece tweets from each legislator are revealed and hidden. To highlight the polarization we group the legislators by party alignment so that the most partisan legislators are located at the far ends of the piece. As participants move away from the middle in either direction, they will see more tweets from increasingly partisan legislators.”

Emily Saltz is a UX Designer from Bloomberg LP and will be traveling from New York with her entry Filter Bubble Roulette, after receiving prize money of $5,000. Previously she was UX and Content Strategist at Pop Up Archive, an automatic speech recognition service and API acquired by Apple.

Emily says of her entry: “This social webVR platform plays into each user’s curiosity to peek into other social media filter bubbles, using content pulled from social media as conversational probes. It will enable immersive connection people across diverse social and political networks. The project is based on the hypotheses that 1) users are curious to peek into the social media universes of others, 2) it’s harder to be a troll when you’re immersed in someone else’s 3D space, and 3) viewing another person’s filter bubble in context of their other interests will enable more reflection and empathy between groups.”

Rahul Bhargava is a researcher and technologist specializing in civic technology and data literacy at the MIT Center for Civic Media. There he leads technical development on projects ranging from interfaces for quantitative news analysis, to platforms for crowd-sourced sensing. Based in Boston, he also won $5,000 to create his entry Gobo: understanding social media algorithms.

Rahul says of his entry: “The public lacks a basic understanding of the algorithm-driven nature of most online platforms. In parallel, technology companies generally place blind trust in algorithms as “neutral” actors in content promotion. Our idea tackles this perfect storm with a card-driven interactive piece, where social media content is scored with a variety of algorithms and prompts to discuss how those can drive content filtering and promotion. Visitors are engaged to use these scores as inputs to construct their own meta-algorithm, deciding whether things like “gender” detection, “rudeness” ranking, or “sentiment” analysis would drive which content they want to see.”

The Reality Redrawn Challenge is part of the Mozilla Information Trust Initiative announced last year to build a movement to fight misinformation online. The initiative aims to stimulate work towards this goal on products, research, literacy and creative interventions.


Mozilla ThunderbirdWhat Thunderbird Learned at FOSDEM

Hello everyone! I’m writing this following a visit to Brussels this past weekend to the Free and Open Source Software conference called FOSDEM. As far as I know it is one of the largest, if not the largest, FOSS conferences in Europe. It proved to be a great opportunity to discuss Thunderbird with a wide range of contributors, users, and interested developers – and the feedback I received at the event was fantastic (and helpful)!

First, some background: the Thunderbird team was stationed in the Mozilla booth, on the second floor of building K. We were next to the Apache Software Foundation and the Kopano Collaborative software booths (the Kopano folks gave us candy with “Mozilla” printed on it – very cool). We had hundreds of people stop by the booth and I got to ask a bunch of them about what they thought of Thunderbird. Below are some insights I gained from talking to the FOSDEM attendees.

Feedback from FOSDEM

1. I thought the project was dead. What’s the plan for the future of Thunderbird?

This was the number one thing I heard repeatedly throughout the conference. That is not surprising: while the project has remained active following its split from Mozilla Corporation, it has not been seen to push boundaries or make a lot of noise about its own initiatives. We, as the Thunderbird community, should be planning for the future and what that looks like – once we have a concrete roadmap, we should share it with the world to solicit interest and enthusiasm.

Lest this question be misunderstood: it was never asked with malevolent intent or in a dismissive way (as far as I could tell). Most of the people who commented on the project being dead were genuinely interested in using Thunderbird (or still using it), but didn’t realize anyone was actively doing development. Many people shared their relief, saying things like: “I was planning on having to move to something else for a mail client, but now that I’ve seen the project making plans, I’m going to stay with it.”

Currently, we have a lot to talk about regarding the future of Thunderbird. We have made new hires (yours truly included), we are hiring a developer to work on various parts of the project, and we are working with organizations like Monterail in order to get feedback on the interface. With the upcoming Thunderbird Council elections, the Community will get an opportunity to shape the leadership of the project as well.

2. I would like to see a mobile app.

The second most prevalent thing expressed to me at FOSDEM was the desire for a Thunderbird mobile app. When I asked what that might look like the answers were uniformly along the lines of: “There is not a really good, open source, Email client on mobile. Thunderbird seems like a great project with the expertise to solve that.”

3. Where’s the forum?

I heard this a few times and was surprised at how adamant the people asking were. They pointed out that they were Thunderbird users, but weren’t really into mailing lists. It was reiterated to me a handful of times that Discourse allows you to respond via email or the website. As a result I have begun working on setting something up.

The biggest barrier I see to making a forum a core part of the community effort is getting buy-in from MOST of the contributors to the project currently. So, over the next week I’m going to try and get an idea of who is interested in participating and who is opposed.

4. I want built-in Encryption

This was a frequent request, asked in two forms: first, “How can I encrypt my Thunderbird email?” and second, “Can you make encryption a default feature?” The frequency with which this was asked indicates that it is important to this segment of our users (open source, technical).

For those who are curious how to encrypt your mail currently – the answer is to use the Enigmail extension. In the future, we may be able to make this easier by building it into Thunderbird and making it possible to enable in the settings. But that is a discussion that the community and developers need to explore further.

Final Thoughts

In closing, I heard a great many things beyond those four key points above – but many were thoughts on specific bugs people experienced (you can file bugs here), or just comments on how people mostly use webmail these days. On that second point, I heard it so frequently that I began to wonder what more we could offer as a project to provide added value to users beyond what the likes of Gmail, Inbox, and Outlook 365 offer.

All around, FOSDEM was a great event: I met great people, heard amazing talks, and got to spread the good word of Thunderbird. I would love to hear the community’s thoughts on what I heard (yes, that means you), so please leave a comment below.

Mozilla Marketing Engineering & Ops BlogMDN Changelog for January 2018

Here’s what happened in January to the code, data, and tools that support MDN Web Docs:

Here’s the plan for February:

Done in January

Completed CSS Compatibility Data Migration and More

Thanks to Daniel D. Beck and his 83 Pull Requests, the CSS compatibility data is migrated to the browser-compat-data repository. This finishes Daniel’s current contract, and we hope to get his help again soon.

The newly announced MDN Product Advisory Board supports the Browser Compatibility Data project, and members are working to migrate more data. In January, we saw an increase in contributions, many from first-time contributors. The migration work jumped from 39% to 43% complete in January. See the contribution guide to learn how to help.

On January 23, we turned on the new browser compatibility tables for all users. The new presentation provides a good overview of feature support across desktop and mobile browsers, as well as JavaScript run-time environments like Node.js, while still letting implementors dive into the details.

Florian Scholz promoted the project with a blog post, and highlighted the compat-report addon by Eduardo Bouças that uses the data to highlight compatibility issues in a developer tools tab. Florian also gave a talk about the project on February 3 at FOSDEM 18. We’re excited to tell people about this new resource, and see what people will do with this data.


Shipped a New Method for Declaring Language Preference

If you use the language switcher on MDN, you’ll now be asked if you want to always view the site in that language. This was added by Safwan Rahman in PR 4321.


This preference goes into effect for our “locale-less” URLs. If you access https://developer.mozilla.org/docs/Web/HTML, MDN uses your browser’s preferred language, as set by the Accept-Language header. If it is set to Accept-Language: en-US,en;q=0.5, then you’ll get the English page at https://developer.mozilla.org/en-US/docs/Web/HTML, while Accept-Language: de-CH will send you to the German page at https://developer.mozilla.org/de/docs/Web/HTML. If you’ve set a preference with this new dialog box, the Accept-Language header will be ignored and you’ll get your preferred language for MDN.

This is useful for MDN visitors who like to browse the web in their native language but read MDN in English; it doesn’t fix the issue entirely, though. If a search engine thinks you prefer German, for instance, it will pick the German translations of MDN pages, and send you to https://developer.mozilla.org/de/docs/Web/HTML. MDN respects the link and shows the German page, and the new language preference is not used.

We hope this makes MDN a little easier to use, but more will be needed to satisfy those who get the “wrong” page. I’m not convinced there is a solution that will work for everyone. I’ve suggested a web extension in bug 1432826, to allow configurable redirects, but it is unclear if this is the right solution. We’ll keep thinking about translations, and adjusting to visitors’ preferences.
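To make the redirect behaviour above concrete, here is a rough sketch of the kind of decision logic involved when a locale-less URL is requested. This is only an illustration: MDN itself is a Django application, and the function and locale list below are hypothetical.

// Hypothetical illustration of MDN-style locale negotiation (not the real code).
const SUPPORTED_LOCALES = ['en-US', 'de', 'fr']; // simplified list

// Parse an Accept-Language header like "de-CH,de;q=0.9,en;q=0.5"
// into language tags ordered by their q-values.
function parseAcceptLanguage(header) {
  return header
    .split(',')
    .map((part) => {
      const [tag, q] = part.trim().split(';q=');
      return { tag, q: q ? parseFloat(q) : 1.0 };
    })
    .sort((a, b) => b.q - a.q)
    .map((entry) => entry.tag);
}

// For a locale-less URL like /docs/Web/HTML, pick the locale to redirect to.
// A stored preference (the new dialog) wins over the Accept-Language header.
function chooseLocale(acceptLanguageHeader, storedPreference) {
  if (storedPreference && SUPPORTED_LOCALES.includes(storedPreference)) {
    return storedPreference;
  }
  for (const tag of parseAcceptLanguage(acceptLanguageHeader)) {
    const match = SUPPORTED_LOCALES.find(
      (loc) => loc === tag || loc.split('-')[0] === tag.split('-')[0]
    );
    if (match) return match;
  }
  return 'en-US'; // fallback
}

console.log(chooseLocale('de-CH,de;q=0.9,en;q=0.5', null));    // "de"
console.log(chooseLocale('de-CH,de;q=0.9,en;q=0.5', 'en-US')); // "en-US"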

Increased Availability of MDN

MDN easily serves millions of visitors a month, but struggles under some traffic patterns, such as a single visitor requesting every page on the site. We continue to make MDN more reliable despite these traffic spikes, using several different strategies.

The most direct method is to limit the number of requests. We’ve updated our rate limiting to return the HTTP 429 “Too Many Requests” code (PR 4614), to more clearly communicate when a client hits these limits. Dave Parfitt automated bans for users making thousands of requests a minute, far more than any legitimate scraper makes.
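As an illustration of the idea (not MDN’s actual Django-based implementation), a fixed-window rate limiter that answers over-budget clients with HTTP 429 could look like this sketch; the window size and request budget are made-up numbers.

// A sketch of a fixed-window rate limiter answering with HTTP 429
// "Too Many Requests". Illustrative only; values are made up.
const http = require('http');

const WINDOW_MS = 60 * 1000; // one minute
const MAX_REQUESTS = 400;    // hypothetical per-client budget per window
const counters = new Map();  // ip -> { windowStart, count }

function isRateLimited(ip, now = Date.now()) {
  const entry = counters.get(ip);
  if (!entry || now - entry.windowStart >= WINDOW_MS) {
    counters.set(ip, { windowStart: now, count: 1 });
    return false;
  }
  entry.count += 1;
  return entry.count > MAX_REQUESTS;
}

http.createServer((req, res) => {
  if (isRateLimited(req.socket.remoteAddress)) {
    res.writeHead(429, { 'Retry-After': '60' });
    res.end('Too Many Requests');
    return;
  }
  res.writeHead(200);
  res.end('OK');
}).listen(8000);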

Another strategy is to reduce the database load for each request, so that high traffic doesn’t slow down the database and all the page views. We’re reducing database usage by changing how async processes store state (PR 4615) and using long-lasting database connections to reduce time spent establishing per-request connections (PR 4644).

Safwan Rahman took a close look at the database usage for wiki pages, and made several changes to reduce both the number of queries and the size of the data transmitted from the database (PR 4630). This last change has significantly reduced the network traffic to the database.


All of these add up to a 10% to 15% improvement in server response time from December’s performance.

Ryan Johnson continued work on the long-term solution, to serve MDN content from a CDN. This requires getting our caching headers just right (PR 4638). We hope to start shipping this in February. At that point, a high-traffic user may still slow down the servers, but most people will quickly get their content from the CDN instead.

Shipped Tweaks and Fixes

There were 326 PRs merged in January:

67 of these were from first-time contributors:

Other significant PRs:

Planned for February

Continue Development Projects

In February, we’ll continue working on our January projects. Our plans include:

  • Converting more compatibility data
  • Serving developer.mozilla.org from a CDN
  • Updating third-party libraries for compatibility with Django 1.11
  • Designing interactive examples for more complex scenarios
  • Preparing for a team meeting and “Hack on MDN” event in March

See the December report for more information on these projects.

K Lars LohnLars and the Real Internet of Things - Part 1

This is the first in a series of blog postings about the Internet of Things (IoT).  I'm going to cover some history, then talk about and demonstrate Mozilla's secure, privacy-protecting Things Gateway, and finally talk about writing the software for my own IoT devices to work with the Things Gateway.


First, though, my history with home automation:

When I was a teenager in the 1970s, I had an analog alarm clock with an electrical outlet on the back labeled "coffee".  About ten minutes before the alarm would go off, it would turn on the power to the outlet.  This was apparently to start a coffee maker that had been set up the night before.  I, instead, used the outlet to turn on my record player so I could wake to music of my own selection.  Ten years after the premiere of the Jetsons' automated utopia, this was the extent of home automation available to the average consumer.

By the late 1970s and into the 1980s, the landscape changed in consumer home automation.  A Scottish electronics company conceived of a remote control system that would communicate over power lines.  By the mid 1980s, the X10 system of controllers and devices was available at Radio Shack and many other stores.

I was an early adopter of this technology, automating lamps, ceiling lights and porch lights.  After the introduction of an RS-232 controller that allowed the early MS-DOS PCs to control lights, I was able to get porch lights to follow sunrise, sunset and daylight savings rules.

X10 was unreliable.  In communicating over power lines, it encoded its data into the momentary zero voltage between the peaks of alternating current: it maxed out at about 20 bits per second.  Nearly anything could garble communication: the dishwasher, the television, the neighbor's electric drill.  Many of the components were poorly manufactured.  Wall switches not only completely lacked style and ergonomics, but they would last only a year or so before requiring replacement. In 1990, a power surge during a thunderstorm wiped out almost all of my X10 devices.  I was done with X10; it was too expensive and unreliable.

For the next twenty years, I lived just fine without home automation, but the industry advanced.  Insteon, Z-Wave and Zigbee were all invented in the 2000s for home automation.  Their high cost and my soured experience with X10 kept me away.

In the last ten years, there has been a renaissance in home automation in connection with the Internet: the Internet of Things.  I looked at the new options, saw they were still expensive, and saw they had new flaws: security and privacy.  I bought a couple of the Belkin Wemo devices that I could control with my iPhone and found they were, like X10, unreliable.  Sometimes they'd work and sometimes they wouldn't.  Then in 2013, a security flaw was found that could allow someone else to take control or even invade the home network.  The Wemo devices required a firmware security update, and after hurting my back crawling behind the couch to do the update, I decided they were not worth the effort.  The Wemo devices were added to my local landfill.


I watched from the sidelines as more and more companies jumped into the IoT field.  A little research showed how Z-Wave and Zigbee devices could be more secure, but with two competing, incompatible standards, how could I decide?  I didn't want to buy the wrong thing and then suffer an orphaned system.  I couldn't justify the expense.

What really got me interested again was the Philips Hue system of color-changeable lights.  The cost, coupled with Philips' on-again, off-again willingness to allow third-party products to interact with their hub, forestalled my adoption.

I held back until the Samsung SmartThings device was introduced.  Here was a smart home hub that could talk to both Z-Wave and Zigbee devices.  I added one to my Amazon shopping cart along with a number of lamp controller switches.  I didn't press the buy button because I was looking for the flaw.  Of course, there was one, a big one: the Internet was required.  Since it relied on mobile phones to control the Smart Home hub, if the Internet was down, control of the devices stopped.  Or so it seemed; the documentation was vague on the subject.  I finally confirmed it by talking with an acquaintance who had the system.  This system was not for me.

I was again an IoT wallflower, longing to dance but unwilling to step onto the dance floor.

In December of 2017, however, I saw a demonstration of a new experimental system from Mozilla called the Things Gateway.  It offers protocol-agnostic control over IoT devices.  It can control Z-Wave and Zigbee devices at the same time.  The software runs on a computer, even a Raspberry Pi.  Because it offers a web server on the local home network, any web browser on a phone, tablet or desktop machine at home can control it.  Unlike most commercial IoT controllers, if the internet is out, I can still control things while I'm home.  As a plus, Mozilla offers a secure method of reaching the local Things Gateway web server from the internet. For many folks, controlling things while away from home is important; for me, I could do without that feature.

The final convincing argument?  It's open source and completely customizable.   I cannot resist any longer.

My next blog posting will walk through the process of downloading and setting up a Mozilla Things Gateway.   I'll show how I connected  Z-Wave, Zigbee and Philips Hue lights into one smart home network.  Subsequent postings will show how I can use the Python programming language to enable new devices to join the Internet of Things.

I'm quite excited about this project.

Mozilla Things Gateway

Mozilla Hacks Blog about Things Gateway


Hacks.Mozilla.OrgHow to build your own private smart home with a Raspberry Pi and Mozilla’s Things Gateway

Last year we announced Project Things by Mozilla. Project Things is a framework of software and services that can bridge the communication gap between connected devices by giving “things” URLs on the web.

Today I’m excited to tell you about the latest version of the Things Gateway and how you can use it to directly monitor and control your home over the web, without a middleman. Instead of installing a different mobile app for every smart home device you buy, you can manage all your devices through a single secure web interface. This blog post will explain how to build your own Web of Things gateway with a Raspberry Pi and use it to connect existing off-the-shelf smart home products from various different brands using the power of the open web.

There are lots of exciting new features in the latest version of the gateway, including a rules engine for setting ‘if this, then that’ style rules for how things interact, a floorplan view to lay out devices on a map of your home, experimental voice control and support for many new types of “things”. There’s also a brand new add-ons system for adding support for new protocols and devices, and a new way to safely authorise third party applications to access your gateway.

Hardware

The first thing to do is to get your hands on a Raspberry Pi® single board computer. The latest Raspberry Pi 3 has WiFi and Bluetooth support built in, as well as access to GPIO ports for direct hardware connections. This is not essential as you can use alternative developer boards, or even your laptop or desktop computer, but it currently provides the best experience.

If you want to use smart home devices using other protocols like Zigbee or Z-Wave, you will need to invest in USB dongles. For Zigbee we currently support the Digi XStick (ZB mesh version). For Z-Wave you should be able to use any OpenZWave compatible dongle, but so far we have only tested the Sigma Designs UZB Stick and the Aeotec Z-Stick (Gen5). Be sure to get the correct device for your region as Z-Wave operating frequencies can vary between countries.

You’ll also need a microSD card to flash the software onto! We recommend at least 4GB.

Then there’s the “things” themselves. The gateway already supports many different smart plugs, sensors and smart bulbs from lots of different brands using Zigbee, Z-Wave and WiFi. Take a look at the wiki for devices which have already been tested. If you would like to contribute, we are always looking for volunteers to help us test more devices. Let us know what other devices you’d like to see working and consider building your own adapter add-on to make it work! (see later).

If you’re not quite ready to splash out on all this hardware, but you want to try out the gateway software, there’s now a Virtual Things add-on you can install to add virtual things to your gateway.

Software

Next you’ll need to download the Things Gateway 0.3 software image for the Raspberry Pi and flash it onto your SD card. There are various ways of doing this but Etcher is a graphical application for Windows, Linux and MacOS which makes it easy and safe to do.

If you want to experiment with the gateway software on your laptop or desktop computer, you can follow the instructions on GitHub to download and build it yourself. We also have an experimental OpenWrt package and support for more platforms is coming soon. Get in touch if you’re targeting a different platform.

First Time Setup

Before booting up your gateway with the SD card inserted, ensure that any Zigbee or Z-Wave USB dongles are plugged in.

When you first boot the gateway, it acts as a WiFi hotspot broadcasting the network name (SSID) “Mozilla IoT Gateway”. You can connect to that WiFi hotspot with your laptop or smartphone, which should automatically direct you to a setup page. Alternatively, you can connect the Raspberry Pi directly to your network using a network cable and type gateway.local into your browser to begin the setup process.

First, you’re given the option to connect to a WiFi network:

 

If you choose to connect to a WiFi network you’ll be prompted for the WiFi password and then you’ll need to make sure you’re connected to that same network in order to continue setup.

Next, you’ll be asked to choose a unique subdomain for your gateway, which will automatically generate an SSL certificate for you using LetsEncrypt and set up a secure tunnel to the Internet so you can access the gateway remotely. You’ll be asked for an email address so you can reclaim your subdomain in future if necessary. You can also choose to use your own domain name if you don’t want to use the tunneling service, but you’ll need to generate your own SSL certificate and configure DNS yourself.

You will then be securely redirected to your new subdomain and you’ll be prompted to create your user account on the gateway.

You’ll then automatically be logged into the gateway and will be ready to start adding things. Note that the gateway’s web interface is a Progressive Web App that you can add to homescreen on your smartphone with Firefox.

Adding Things

To add devices to your gateway, click on the “+” icon at the bottom right of the screen. This will put all the attached adapters into pairing mode. Follow the instructions for your individual device to pair it with the gateway (this often involves pressing a button on the device while the gateway is in pairing mode).

Devices that have been successfully paired with the gateway will appear in the add device screen and you can give them a name of your choice before saving them on the gateway.

The devices you’ve added will then appear on the Things screen.

You can turn things on and off with a single tap, or click on the expand button to go to an expanded view of all the thing’s properties. For example a smart plug has an on/off switch and reports its current power consumption, voltage, current and frequency.

With a dimmable colour light, you can turn the light on and off, set its colour, and set its brightness level.

Rules Engine

By clicking on the main menu you can access the rules engine.

The rules engine allows you to set ‘if this, then that’ style rules for how devices interact with each other. For example, “If Smart Plug A turns on, turn on Smart Plug B”.

To create a rule, first click the “+” button at the bottom right of the rules screen. Then drag and drop things onto the screen and select the properties of the things you wish to connect together.

 

You can give your rule a name and then click back to get back to the rules screen where you’ll see your new rule has been added.

Floorplan

Clicking on the “floorplan” option from the main menu allows you to arrange devices on a floorplan of your home. Click the edit button at the bottom right of the screen to upload a floorplan image.

You’ll need to create the floorplan image yourself. This can be done with an online tool or graphics editor, or you can just scan a hand-drawn map of your home! An SVG file with white lines and a transparent background works best.

You can arrange devices on the floor plan by dragging them around the screen.

Just click “save” when you’re done and you’ll see all of your devices laid out. You can click on them to access their expanded view.

Add-ons

The gateway has an add-ons system so that you can extend its capabilities. It comes with the Zigbee and Z-Wave adapter add-ons installed by default, but you can add support for additional adapters through the add-ons system under “settings” in the main menu.

For example, there is a Virtual Things add-on which allows you to experiment with different types of web things without needing to buy any real hardware. Click the “+” button at the bottom right of the screen to see a list of available add-ons, then click the “+ Add” button on any add-on you want to install.

When you navigate back to the add-ons screen you’ll see the list of add-ons that have been installed and you can enable or disable them.

In the next blog post, you’ll learn how to create, package, and share your own adapter add-ons in the programming language of your choice (e.g. JavaScript, Python or Rust).

Voice UI

The gateway also comes with experimental voice controls which are turned off by default. You can enable this feature through “experiments” in settings.

Once the “Speech Commands” experiment is turned on you’ll notice a microphone icon appear at the top right of the things screen.

If the smartphone or PC you’re using has a microphone you can tap the microphone and issue a voice command like “Turn kitchen on” to control devices connected to the gateway.

The voice control is still very experimental and doesn’t yet recognise a very wide range of vocabulary, so it’s best to try to stick to common words like kitchen, balcony, living room, etc. This is an area we’ll be working on improving in future, in collaboration with the Voice team at Mozilla.

Updates

Your gateway software should automatically keep itself up to date with over-the-air updates from Mozilla. You can see what version of the gateway software you’re running by clicking on “updates” in Settings.

What’s Coming Next?

In the next release, the Mozilla IoT team plans to create new gateway adapters to connect more existing smart home devices to the Web of Things. We are also starting work on a collection of software libraries in different programming languages, to help hackers and makers build their own native web things which directly expose the Web Thing API, using existing platforms like Arduino and Android Things. You will then be able to add these things to the gateway by their URL.
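To give a flavour of what “a thing with a URL” means in practice, here is a minimal sketch of a device exposing a description and one property over plain HTTP in Node.js. The JSON fields are simplified placeholders rather than the exact Web Thing API schema, which is defined by the specification and the upcoming libraries.

// Minimal sketch of a "web thing": a device that serves a JSON description
// and a property endpoint over plain HTTP. Field names are simplified
// placeholders, not the exact Web Thing API schema.
const http = require('http');

let lampOn = false;

const description = {
  name: 'Example Lamp',
  type: 'onOffSwitch',
  properties: {
    on: { type: 'boolean', href: '/properties/on' }
  }
};

http.createServer((req, res) => {
  if (req.url === '/' && req.method === 'GET') {
    res.writeHead(200, { 'Content-Type': 'application/json' });
    res.end(JSON.stringify(description));
  } else if (req.url === '/properties/on' && req.method === 'GET') {
    res.writeHead(200, { 'Content-Type': 'application/json' });
    res.end(JSON.stringify({ on: lampOn }));
  } else if (req.url === '/properties/on' && req.method === 'PUT') {
    let body = '';
    req.on('data', (chunk) => (body += chunk));
    req.on('end', () => {
      lampOn = Boolean(JSON.parse(body).on);
      res.writeHead(200, { 'Content-Type': 'application/json' });
      res.end(JSON.stringify({ on: lampOn }));
    });
  } else {
    res.writeHead(404);
    res.end();
  }
}).listen(8888); // the thing is now addressable at http://<device>:8888/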

We will continue to contribute to standardisation of a Web Thing Description format and API via the W3C Web of Things Interest Group. By giving connected devices URLs on the web and using a standard data model and API, we can help create more interoperability on the Internet of Things.

The next blog post will explain how to build, package and share your own adapter add-on using the programming language of your choice, to add new capabilities to the Things Gateway.

How to Contribute

We need your help! The easiest way to contribute is to download the Things Gateway software image (0.3 at the time of writing) and test it out for yourself with a Raspberry Pi, to help us find bugs and suggest new features. You can view our source code and file issues on GitHub. You can also help us fix issues with pull requests and contribute your own adapters for the gateway.

If you want to ask questions, you can find us in #iot on irc.mozilla.org or the “Mozilla IoT” topic in Discourse. See iot.mozilla.org for more information and follow @MozillaIoT on Twitter if you want to be kept up to date with developments.

Happy hacking!

 

The Mozilla BlogAnnouncing “Project Things” – An open framework for connecting your devices to the web.

Last year, we said that Mozilla is working to create a framework of software and services that can bridge the communication gap between connected devices. Today, we are pleased to announce that anyone can now build their own Things Gateway to control their connected device directly from the web.

We kicked off “Project Things”, with the goal of building a decentralized ‘Internet of Things’ that is focused on security, privacy, and interoperability. Since our announcement last year, we have continued to engage in open and collaborative development with a community of makers, testers, contributors, and end-users, to build the foundation for this future.

Today’s launch makes it easy for anyone with a Raspberry Pi to build their own Things Gateway. In addition to web-based commands and controls, a new experimental feature shows off the power and ease of using voice-based commands. We believe this is the most natural way for users to interact with their smart home. Getting started is easy, and we recommend checking out this tutorial to get connected.

The Future of Connected Devices

Internet of Things (IoT) devices have become more popular over the last few years, but there is no single standard for how these devices should talk to each other. Each vendor typically creates a custom application that only works with their own brand. If the future of connected IoT devices continues to involve proprietary solutions, then costs will stay high, while the market remains fragmented and slow to grow. Consumers should not be locked into a specific product, brand, or platform. This will only lead to paying premium prices for something as simple as a “smart light bulb”.

We believe the future of connected devices should be more like the open web. The future should be decentralized, and should put the power and control into the hands of the people who use those devices. This is why we are committed to defining open standards and frameworks.

A Private “Internet of Things”

Anyone can build a Things Gateway using popular devices such as the Raspberry Pi. Once it is set up, it will guide you through the process of connecting to your network and adding your devices. The setup process will provide you with a secure URL that can be used to access and control your connected devices from anywhere.

Powerful New Features

Our latest release of the Things Gateway has several new features available. These features include:

  • The ability to use the microphone on your computer to issue voice commands
  • A rules engine for setting ‘If this, then that’ logic for how devices interact with each other
  • A floor-plan view to lay out devices on a map of your home
  • Additional device type support, such as smart plugs, dimmable and colored lights, multi-level switches and sensors, and “virtual” versions of them, in case you don’t have a real device
  • An all-new add-on system for supporting new protocols and devices
  • A new system for safely authorizing third-party applications (using OAuth)

Built for everyone, not just hackers

If you have been following our progress with Project Things, you’ll know that up to now, it was only really accessible to those with a good amount of technical knowledge. With today’s release, we have made it easy for anyone to get started on building their own Things Gateway to control their devices. We take care of the complicated stuff so that you can focus on the fun stuff such as automation, ‘if this, then that’ rules, adding a greater variety of devices, and more.

Getting Started

We have provided a full walkthrough of how to get started on building your own private smart home using a Raspberry Pi. You can view the complete walkthrough here.

If you have questions, or you would like to get involved with this project you can join the #iot channel on irc.mozilla.org and participate in the development on GitHub. You can also follow @MozillaIoT on twitter for the latest news.

For more information, please visit iot.mozilla.org.


Daniel StenbergNordic Free Software Award reborn

Remember the glorious year 2009 when I won the Nordic Free Software Award?

This award tradition, started in 2007, was put on hiatus after 2010 (I believe); no awards have been handed out since, and we have not properly shown our appreciation for the free software heroes of the Nordic region in all that time.

The award has now been reignited by Jonas Öberg of FSFE and you’re all encouraged to nominate your favorite Nordic free software people!

Go ahead and do it right away! You only have until the end of February, so you’d better do it now before you forget about it.

I’m honored to serve on the award jury together with previous award winners.

This year’s Nordic Free Software Award winner will be announced and handed their prize at the FOSS-North conference on April 23, 2018.

(Okay, yes, the “photo” is a montage and not actually showing a real trophy.)

Don MartiFun with numbers

(I work for Mozilla. None of this is secret. None of this is official Mozilla policy. Not speaking for Mozilla here.)

Guess what? According to Emil Protalinski at VentureBeat, the browser wars are back on.

Google is doubling down on the user experience by focusing on ads and performance, an opportunity I’ve argued its competitors have completely missed.

Good point. Jonathan Mendez has some good background on that.

The IAB road blocked the W3C Do Not Track initiative in 2012 that was led by a cross functional group that most importantly included the browser makers. In hindsight this was the only real chance for the industry to solve consumer needs around data privacy and advertising technology. The IAB wanted self-regulation. In the end, DNT died as the IAB hoped.

As third-party tracking made the ad experience crappier and crappier, browser makers tried to play nice. Browser makers tried to work in the open and build consensus.

That didn't work, which shouldn't be a surprise. Imagine if email providers had decided to build consensus with spammers about spam filtering rules. The spammers would have been all like, "It replaces the principle of consumer choice with an arrogant 'Hotmail knows best' system." Any sensible email provider would ignore the spammers but listen to deliverability concerns from senders of legit opt-in newsletters. Spammers depend on sneaking around the user's intent to get their stuff through, so email providers that want to get and keep users should stay on the user's side. Fortunately for legit mail senders and recipients, that's what happened.

On the web, though, not so much.

But now Apple Safari has Intelligent Tracking Prevention. Industry consensus achieved? No way. Safari's developers put users first and, like the man said, if you're not first you're last.

And now Google is doing their own thing. Some positive parts about it, but by focusing on filtering annoying types of ad units they're closer to the Adblock Plus "Acceptable Ads" racket than to a real solution. So it's better to let Ben Williams at Adblock Plus explain that one. I still don't get how it is that so many otherwise capable people come up with "let's filter superficial annoyances and not fundamental issues" and "let's shake down legit publishers for cash" as solutions to the web advertising problem, though. Especially when $16 billion in adfraud is just sitting there. It's almost as if the Lumascape doesn't care about fraud because it's priced in so it comes out of the publisher's share anyway.

So with all the money going to fraud and the intermediaries that facilitate it, local digital news publishers are looking for money in other places and writing off ads. That's good news for the surviving web ad optimists (like me) because any time Management stops caring about something you get a big opportunity to do something transformative.

Small victories

The web advertising problem looks big, but I want to think positive about it.

  • billions of web users

  • visiting hundreds of web sites

  • with tens of third-party trackers per site.

That's trillions of opportunities for tiny victories against adfraud.

Right now most browsers and most fraudbots are hard to tell apart. Both maintain a single "cookie jar" across trusted and untrusted sites, and both are subject to fingerprinting.

For fraudbots, cross-site trackability is a feature. A fraudbot can only produce valuable ad impressions on a fraud site if it is somehow trackable from a legit site.

For browsers, cross-site trackability is a bug, for two reasons.

  • Leaking activity from one context to another violates widely held user norms.

  • Because users enjoy ad-supported content, it is in the interest of users to reduce the fraction of ad budgets that go to fraud and intermediaries.

Browsers don't have to solve the whole web advertising problem to make a meaningful difference. As soon as a trustworthy site's real users look different enough from fraudbots, because fraudbots make themselves more trackable than users running tracking-protected browsers do, then low-reputation and fraud sites claiming to offer the same audience will have a harder and harder time trying to sell impressions to agencies that can see it's not the same people.

Of course, the browser market share numbers will still over-represent any undetected fraudbots and under-represent the "conscious chooser" users who choose to turn on extra tracking protection options. But that's an opportunity for creative ad agencies that can buy underpriced post-creepy ad impressions and stay away from overvalued or worthless bot impressions. I expect that data on who has legit users—made more accurate by including tracking protection measurements—will be proprietary to certain agencies and brands that are going after customer segments with high tracking protection adoption, at least for a while.

Bonus links

Now even YouTube serves ads with CPU-draining cryptocurrency miners http://arstechnica.com/information-technology/2018/01/now-even-youtube-serves-ads-with-cpu-draining-cryptocurrency-miners/ … by @dangoodin001

Remarks delivered at the World Economic Forum

Improving privacy without breaking the web

Greater control with new features in your Ads Settings

PageFair’s long letter to the Article 29 Working Party

‘Never get high on your own supply’ – why social media bosses don’t use social media

Can you detect WebDriver sessions from inside a web page? https://hoosteeno.com/2018/01/23/can-you-detect-webdriver-sessions-from-inside-a-web-page/ … via @wordpressdotcom

Making WebAssembly even faster: Firefox’s new streaming and tiering compiler

Newsonomics: Inside L.A.’s journalistic collapse

The State of Ad Fraud

The more Facebook examines itself, the more fault it finds

In-N-Out managers earn triple the industry average

Five loopholes in the GDPR

Why ads keep redirecting you to scammy sites and what we’re doing about it

https://digiday.com/media/local-digital-news-publishers-ignoring-display-revenue/

Website operators are in the dark about privacy violations by third-party scripts

Mark Zuckerberg's former mentor says 'parasitic' Facebook threatens our health and democracy

Craft Beer Is the Strangest, Happiest Economic Story in America

The 29 Stages Of A Twitterstorm In 2018

How Facebook Helped Ruin Cambodia's Democracy

How Facebook’s Political Unit Enables the Dark Art of Digital Propaganda

Firefox 57 delays requests to tracking domains

Direct ad buys are back in fashion as programmatic declines

‘Data arbitrage is as big a problem as media arbitrage’: Confessions of a media exec

Why publishers don’t name and shame vendors over ad fraud

News UK finds high levels of domain spoofing to the tune of $1 million a month in lost revenue • Digiday

The Finish Line in the Race to the Bottom

Something doesn’t ad up about America’s advertising market

Fraud filters don't work

Ad retargeters scramble to get consumer consent

This Week In RustThis Week in Rust 220

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

News & Blog Posts

Crate of the Week

This week's crate is datafusion, a query planner/execution framework for Big Data processing. Thanks to andygrove for the suggestion!

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

115 pull requests were merged in the last week

New Contributors

  • Araam Borhanian
  • dpc
  • Jay Strict
  • Jonathan Goodman
  • Matthias Krüger
  • oberien
  • Onur Aslan
  • penpalperson
  • Per Lundberg

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now. This week's FCPs are:

New RFCs

Upcoming Events

If you are running a Rust event please add it to the calendar to get it mentioned here. Email the Rust Community Team for access.

Rust Jobs

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

Rust has a very high friction coefficient.

We call it grip and it lets us drive fearlessly around hard corners very fast.

u/asmx85 on reddit.

Thanks to JustAPerson for the suggestion!

Submit your quotes for next week!

This Week in Rust is edited by: nasa42 and llogiq.

Emma Irwin(New) Diversity & Inclusion In Open Source — Community Call!

 

Last year, after three months of qualitative and quantitative research, we published a series of recommendations for D&I in open source. Since then, we’ve been busy implementing many of those in our work — like these new principles for inclusive volunteer leadership, processes for effectively responding to Community Participation Guideline Reports and investment in Metrics that Matter for D&I in Open Source, to name only a few.

Let’s Work Together!

One thing we heard over and over again in our research was the belief that to move the needle on diversity in open source (and tech overall), we must be more intentional in our collaboration across projects and communities.

To explore this concept, I’ll be sharing the insights from our recent D&I in FOSS survey at the launch of our first D&I in Open Source Community Call on February 28th at 9 AM PST (check your local time).

With over 200 projects represented, we learned a lot… not only from the responses, but from the challenge of creating an inclusive survey — with the privacy, respect and safety of people at the center.

Please join this first call to help shape what comes next. What are you working on, what do you need help with? What speakers would you like to see invited?


We’ll be running this call using Vidyo (which has video and phone-in), as well as Telegram (for those with bandwidth issues, or who prefer text-based interactions). You can sign up here for a calendar invite. The agenda outline is here. Please reach out directly to Emma (eirwin @ mozilla dot com) with any questions.


QMOFirefox 59 Beta 6 Testday Results

Hello everyone,

As you may already know, last Friday – February 2nd – we held a new Testday event, for Firefox 59 Beta 6.

Thank you Adam, Nilam and Gabriela for helping us make Mozilla a better place.

From the India QA Community team: Mohammed Adam, Aishwarya, Fahima Zulfath A. and Surentharan.R.A.

Results:
– several test cases executed for the Form Autofill V2 and Firefox address bar search suggestions features.

– 2 bugs verified: 1038695, 1115976

– 1 new bug filed: 1435590.

Thanks for another successful testday 🙂

We hope to see you all in our next events, all the details will be posted on QMO!

Mozilla VR BlogA-Painter performance optimizations

Introduction


A-Painter was the first demo we made using A-Frame. It was released more than a year ago, and it’s still one of the most popular WebVR experiences shown at meetups and exhibitions.

We wanted to show that the browser can deliver native-like VR experiences and also push the limits of what A-Frame was able to do at the time. We’ve seen A-Painter being used more and more by professional artists and programmers that test the limits and extend its capabilities. Performance is the bottleneck that people first hit with moderately complex drawings, due to the increased number of strokes and geometry. In collaborative drawing experiences, performance degrades even faster since you have multiple users adding geometry simultaneously.

Recently we had some bandwidth and rolled up our sleeves to implement some of the optimization ideas that we had collected in the past (Issue #222, PR #241).

Draw calls simplified

There can be several causes of bad performance in a graphics application, but the number of draw calls is a good starting point to investigate.
A draw call is, as its name indicates, a call to a drawing function of the graphics API with the geometry and the material properties that we want to render. In WebGL it could be a call to gl.drawElements or gl.drawArrays. These calls are expensive, so we want to keep them as low as possible.
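
To make that concrete, here is a generic WebGL illustration (not A-Painter code); the program, buffer and count variables are hypothetical placeholders. A renderer typically issues one call like the last line per mesh/material combination:

// Generic WebGL illustration: the gl.drawArrays call at the end is one draw call.
gl.useProgram(strokeProgram);                      // hypothetical shader program
gl.bindBuffer(gl.ARRAY_BUFFER, strokePositions);   // hypothetical vertex buffer
gl.vertexAttribPointer(positionLocation, 3, gl.FLOAT, false, 0, 0);
gl.enableVertexAttribArray(positionLocation);
gl.drawArrays(gl.TRIANGLE_STRIP, 0, vertexCount);  // the actual draw call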

We will use the Balrog scene by @feiss to measure the impact of the optimizations I’m proposing here.
The following image shows the statistics when rendering this scene on my computer (Windows i7 GTX1080):

A-Painter performance optimizations

We should identify which numbers affect our performance:

  • 14 textures: Every brush that needs a material (lines and stamps) creates its own texture. Instead, they could reuse an atlas with all the textures.
  • 542 entities: We created one entity per stroke.
  • 454 geometries: One entity per stroke means also one Object3D, Mesh, BufferGeometry and Material per stroke.
  • 450 calls: The number of draw calls we would like to optimize.

To reduce the number of draw calls we should:

  • Reduce the number of textures.
  • Reduce the number of materials (same reason).
  • Reduce the number of geometries by merging all the meshes we can into a bigger mesh, so we could use just one draw call to paint multiple geometries.

In the following sections we will go through these steps explaining how we could apply them to our application.

Materials

Atlas

The first step is to reduce the number of materials created, as switching from one material to another causes a new draw call.

We will start by creating a new atlas containing all the brush textures, using the spritesheet.js tool. We add a new npm command called atlas that uses spritesheet-js to pack them:

"atlas": "spritesheet-js --name brush_atlas --path assets/images brushes/*.png"

If we now execute npm run atlas, it will create a png image with all our brush textures and a JSON file with the needed information to locate them inside the atlas.

A-Painter performance optimizations

The generated JSON is very easy to parse and includes the size of the generated atlas (meta.size) and the list of images included with their position on the atlas (frames).

{
    "meta": {
        "image": "brush_atlas.png",
        "size": {"w":3584,"h":2944},
        "scale": "1"
    },
    "frames": {
        "stamp_grass.png":
        {
            "frame": {"x":0,"y":128,"w":1536,"h":512},
            "rotated": false,
            "trimmed": false,
            "spriteSourceSize": {"x":0,"y":0,"w":1536,"h":512},
            "sourceSize": {"w":1536,"h":512}
        },
        "lines4.png":
        {
            "frame": {"x":0,"y":0,"w":2048,"h":128},
            "rotated": false,
            "trimmed": false,
            "spriteSourceSize": {"x":0,"y":0,"w":2048,"h":128},
            "sourceSize": {"w":2048,"h":128}
        },
        "stamp_fur2.png":
        {
            "frame": {"x":0,"y":640,"w":1536,"h":512},
            "rotated": false,
            "trimmed": false,
            "spriteSourceSize": {"x":0,"y":0,"w":1536,"h":512},
            "sourceSize": {"w":1536,"h":512}
        },
        ...
    }
}

Now we need a simple way to convert the local UV coordinates to the new coordinates inside the atlas space. To ease this task we will create a helper class called Atlas that will parse the generated JSON and will provide two functions to convert our UV coordinates.

function Atlas () {  
  this.map = new THREE.TextureLoader().load('assets/images/' + AtlasJSON.meta.image);
}

Atlas.prototype = {  
  getUVConverters (filename) {
    if (filename) {
      filename = filename.replace('brushes/', '');
      return {
        convertU (u) {
          var totalSize = AtlasJSON.meta.size;
          var data = AtlasJSON.frames[filename];
          if (u > 1 || u < 0) {
            u = 0;
          }
          return data.frame.x / totalSize.w + u * data.frame.w / totalSize.w;
        },

        convertV (v) {
          var totalSize = AtlasJSON.meta.size;
          var data = AtlasJSON.frames[filename];
          if (v > 1 || v < 0) {
            v = 0;
          }

          return 1 - (data.frame.y / totalSize.h + v * data.frame.h / totalSize.h);
        }
      };
    } else {
      return {
        convertU (u) { return u; },
        convertV (v) { return v; }
      };
    }
  }
};

Using this helper we could easily convert the following code:

material.map = "lines1.png";
uvs[0].set(0, 0);
uvs[1].set(1, 1);

Into this:

material.map = "atlas.png";
converter = atlas.getUVConverters("lines1.png");
uvs[0].set( converter.convertU(0), converter.convertV(0) );
uvs[1].set( converter.convertU(1), converter.convertV(1) );

Thanks to the atlas technique, the number of textures in our app is reduced from 30 to 1.

Vertex colors

But we still have a problem: each stroke has its own material with a custom value for material.color, making it impossible to share the same material across all the strokes.
Fortunately, reusing the same material while changing only the color has a very simple solution: vertex colors.
With vertex colors we can define a specific color for each vertex of the geometry, which will be multiplied by the material color applied to the whole geometry. Since that material color will be pure white (1, 1, 1), the vertex colors will remain unaltered (we could use any color other than white to tint the vertex colors).

In order to do that, we should set up the material to use vertex colors...

mainMaterial.vertexColors = THREE.VertexColors;  

...and create a new color buffer attribute for our mesh:

var colors = new Float32Array(this.maxBufferSize * 3);
geometry.addAttribute('color', new THREE.BufferAttribute(colors, 3).setDynamic(true));

// Set everything red
var color = [1, 0, 0];
for (var i = 0; i < numVertices; i++) {
   colors[3 * i] = color[0];
   colors[3 * i + 1] = color[1];
   colors[3 * i + 2] = color[2];
}

geometry.attributes.color.needsUpdate = true;

Thus, the number of materials is reduced from the number of individual strokes (hundreds in one painting) to just four: two physically based materials (THREE.MeshStandardMaterial) and two THREE.MeshBasicMaterial, with one textured and one solid-color variant of each.
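
As a rough sketch of what those four shared materials could look like (the variable names match the ones used further below, but the exact construction here is an assumption, not the actual A-Painter code):

// Hedged sketch: four shared materials, all white and using vertex colors,
// with lit (PBR) and unlit variants, each either textured (atlas) or solid-color.
var atlasMap = new THREE.TextureLoader().load('assets/images/brush_atlas.png');

var pbrMaterial = new THREE.MeshStandardMaterial(
  { color: 0xffffff, vertexColors: THREE.VertexColors });
var pbrTexturedMaterial = new THREE.MeshStandardMaterial(
  { color: 0xffffff, vertexColors: THREE.VertexColors, map: atlasMap });
var unlitMaterial = new THREE.MeshBasicMaterial(
  { color: 0xffffff, vertexColors: THREE.VertexColors });
var unlitTexturedMaterial = new THREE.MeshBasicMaterial(
  { color: 0xffffff, vertexColors: THREE.VertexColors, map: atlasMap });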

Reducing the number of A-Frame entities

Initially every stroke generated a new entity, and each entity contains a mesh that is rendered separately with its own draw call. Our goal is to reduce these entities and meshes by merging all the strokes into one big mesh that can be rendered with just one draw call.

We were creating one entity for each stroke and appending it to an <a-entity class="a-drawing"> root entity:

// Start a new stroke
var stroke = brushSystem.startNewStroke();

// Get the root entity
var drawing = document.querySelector('.a-drawing');

// Create a new entity for the current stroke
var entity = document.createElement('a-entity');  
entity.className = "a-stroke";  
drawing.appendChild(entity);  
entity.setObject3D('mesh', stroke.object3D);  
stroke.entity = entity;  

We could remove the per-stroke entity overhead by adding the new stroke mesh directly to the root entity’s Object3D:

var stroke = brushSystem.startNewStroke();

// Add the stroke's mesh directly to the root entity's Object3D
drawing.object3D.add(stroke.object3D);  

We would also need to modify some pieces of code, like the “undo” functionality, to remove the Object3D instead of the entity. This is just a temporary step, though, before we can get rid of the per-stroke Object3D entirely by merging them all and saving many draw calls with shared BufferGeometries.
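
A minimal sketch of what that “undo” change could look like (the function name and the strokes list are hypothetical, just to illustrate the idea):

// Hypothetical sketch: undo now removes the stroke's Object3D from the root
// drawing object instead of removing a per-stroke entity from the DOM.
function undoLastStroke (drawing, strokes) {
  var stroke = strokes.pop();
  if (!stroke) { return; }
  // Before: stroke.entity.parentNode.removeChild(stroke.entity);
  drawing.object3D.remove(stroke.object3D);
}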

Shared BufferGeometry

We have already reduced the number of textures, materials and entities, but we still have an Object3D -> Mesh -> BufferGeometry per stroke, so we haven’t yet reduced the number of draw calls.
Ideally we would have one very big BufferGeometry, keep adding vertices to it on each stroke, and just send that to the GPU, saving plenty of draw calls.

For this purpose we will create a class called SharedBufferGeometry that will be instantiated by just passing the type of material we want to use.

function SharedBufferGeometry (material) {  
  this.material = material;

  this.maxBufferSize = 1000000; // an arbitrary high enough number of vertices
  this.geometries = [];
  this.currentGeometry = null;
  this.addBuffer();
}

addBuffer will create a Mesh with a BufferGeometry with the needed attributes, and it will add it to the list of meshes that the root Object3D holds. The class also contains some helper functions so we don’t need to care about indices or buffer overflows, as that should be handled automatically.

 addVertex: function (x, y, z) {
   var buffer = this.currentGeometry.attributes.position;
   if (this.idx.position === buffer.count) {
     this.addBuffer(true);
     buffer = this.currentGeometry.attributes.position;
   }
   buffer.setXYZ(this.idx.position++, x, y, z);
 },

addColor: function (r, g, b) {  
   this.currentGeometry.attributes.color.setXYZ(this.idx.color++, r, g, b);
 },

 addNormal: function (x, y, z) {
   this.currentGeometry.attributes.normal.setXYZ(this.idx.normal++, x, y, z);
 },

 addUV: function (u, v) {
   this.currentGeometry.attributes.uv.setXY(this.idx.uv++, u, v);
 },
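
addBuffer itself is not shown above; as a rough sketch (the attribute set, the dynamic flags and the way the mesh is attached to a root Object3D are assumptions, not the actual A-Painter implementation), it could look like this:

 // Hedged sketch of addBuffer: allocate a fresh BufferGeometry with
 // preallocated dynamic attributes, wrap it in a Mesh sharing this.material,
 // and make it the current write target.
 addBuffer: function (copyLast) {
   var geometry = new THREE.BufferGeometry();

   var positions = new Float32Array(this.maxBufferSize * 3);
   var normals = new Float32Array(this.maxBufferSize * 3);
   var colors = new Float32Array(this.maxBufferSize * 3);
   var uvs = new Float32Array(this.maxBufferSize * 2);

   geometry.addAttribute('position', new THREE.BufferAttribute(positions, 3).setDynamic(true));
   geometry.addAttribute('normal', new THREE.BufferAttribute(normals, 3).setDynamic(true));
   geometry.addAttribute('color', new THREE.BufferAttribute(colors, 3).setDynamic(true));
   geometry.addAttribute('uv', new THREE.BufferAttribute(uvs, 2).setDynamic(true));

   var mesh = new THREE.Mesh(geometry, this.material);

   // Reset the per-attribute write indices for the fresh buffer. When copyLast
   // is true (a buffer just overflowed), the last vertices would also be copied
   // over so the current stroke can continue; that part is omitted here.
   this.idx = { position: 0, normal: 0, color: 0, uv: 0 };

   this.geometries.push(geometry);
   this.currentGeometry = geometry;

   // Assumption: this.object3D references the drawing's root Object3D so the
   // new mesh gets rendered.
   this.object3D.add(mesh);
 },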

Actually, we’ll need more than one BufferGeometry, because some brushes don’t have textures, so they don’t need UV attributes, and/or are not affected by lighting, so they don’t need a normal attribute either.
We'll create a SharedBufferGeometryManager to handle all these buffers:

function SharedBufferGeometryManager () {  
 this.sharedBuffers = {};
}

SharedBufferGeometryManager.prototype = {  
 addSharedBuffer: function (name, material) {
   var bufferGeometry = new SharedBufferGeometry(material);
   this.sharedBuffers[name] = bufferGeometry;
 },

 getSharedBuffer: function (name) {
   return this.sharedBuffers[name];
 }
};

We will initialize the buffers by calling addSharedBuffer with the different types of materials we’ll be using:

sharedBufferGeometryManager.addSharedBuffer('unlit', unlitMaterial);
sharedBufferGeometryManager.addSharedBuffer('unlitTextured', unlitTexturedMaterial);
sharedBufferGeometryManager.addSharedBuffer('PBR', pbrMaterial);
sharedBufferGeometryManager.addSharedBuffer('PBRTextured', pbrTexturedMaterial);

And we will be ready to use them:

var sharedBuffer = sharedBufferGeometryManager.getSharedBuffer('PBR');
sharedBuffer.addVertex(0, 1, 0);
sharedBuffer.addVertex(1, 1, 0);
sharedBuffer.addUV(0, 0);

As we are storing several meshes without any connection between them, we’ll be using a single triangle soup. That means we’ll store 3 vertices for each triangle without sharing any of them. This increases the bandwidth and memory requirements, as we’re storing (NUM_POINTS_PER_STROKE * 2 * 3) floats for each stroke.
In the case of the Balrog scene, using this method our final buffer will have 406,782 floats.

Triangle strips

In order to reduce the size of our BufferGeometry attributes, we could switch from a triangle soup to triangle strips. Although using triangle strips could be challenging in some situations, using them when painting lines/ribbons is pretty straightforward: we just need to keep adding two new vertices at a time as the line grows.

A-Painter performance optimizations

Using triangle strips we’ll reduce the size of the position attribute array drastically.

  • Triangle soup: NUMBER_OF_STROKE_POINTS * 2 (triangles) * 3 (vertices) * 3 (xyz)
  • Triangle strip: NUMBER_OF_STROKE_POINTS * 2 (vertices)

As an example here are the position array sizes from the Balrog example:

  • Triangle soup: 135,594.
  • Triangle strip: 47,248.

At first it looks like an easy win, but if we look carefully we will realize that we need to fix some issues: we are sharing the same buffer to draw several unconnected strokes, but every time we add a new vertex it will create a new triangle using the two previous vertices, so all the strokes will be hideously connected:

A-Painter performance optimizations

We need a way to tell WebGL to skip those triangles, and although WebGL doesn’t support primitive restart, we can do our own “primitive restart” by creating a degenerate triangle, which is a triangle with no area that is discarded by the GPU.

To create these degenerate triangles and separate two strokes, we just duplicate the last vertex of the first stroke and duplicate the first vertex of the next stroke.

A-Painter performance optimizations

So I created a restartPrimitive() function on the SharedBufferGeometry that duplicates the last vertex of the latest stroke; once the first point of the next stroke is added, it has to be duplicated too.
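
As a minimal sketch (assuming the shared geometry tracks its write indices as shown earlier; the restartVertex flag is a hypothetical way to remember the pending duplicate, not the actual Line Brush implementation):

 // Hedged sketch of restartPrimitive:
 restartPrimitive: function () {
   var position = this.currentGeometry.attributes.position;
   var last = this.idx.position - 1;
   if (last >= 0) {
     // Duplicate the last vertex of the previous stroke...
     this.addVertex(position.getX(last), position.getY(last), position.getZ(last));
   }
   // ...and remember to duplicate the first vertex of the next stroke as well,
   // so the triangles in between have zero area and are discarded by the GPU.
   this.restartVertex = true;
 },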

The hardest part is working with the indices, as we need to keep track of the degenerate vertices so that the resulting offsets are also applied to the texture coordinates, colors and normals.

You can take a look at the Line Brush to see the whole implementation.

Results

In the following table we can see the statistics of the Balrog scene before and after the optimizations described above:

Please note that these numbers also include common scene objects: floor, sky, controllers…

If we open the devtools, we can see that we have drastically reduced the animation frame time:

Before:
A-Painter performance optimizations

After:

A-Painter performance optimizations

Looking at the performance graphs, it’s visible how hard the GC was hitting us because of the allocations we did in the render loop. We’ve also considerably reduced the CPU usage.

Before:

A-Painter performance optimizations

After:
A-Painter performance optimizations

The performance differences are visible both on Chrome and Firefox. For example, the following graphs are from Firefox, where, before the optimizations, the fps dropped to 2 fps almost every two seconds.

Before:
A-Painter performance optimizations

After:
A-Painter performance optimizations

As demonstrated, the optimizations implemented have a major impact on A-Painter performance without adding too much complexity to the code. VR mode benefits the most: you can now paint for a long time without getting motion sickness, which is a pretty good usability enhancement.

Further improvements

There are still many other optimizations that can further improve performance:

  • Find a workaround for a bug (or feature) in the bounding box/sphere computation of BufferGeometry, which always includes the unused vertices of the preallocated array (vertices at the origin, (0,0,0)), so frustum culling isn’t as effective as it should be (see the sketch after this list).
  • Use geometry instancing on brushes that paint spheres and cubes, so we don’t create new geometry on each stroke but reuse the previously created geometry.
  • Use a LOD system to reduce the complexity of drawings based on distance. This could be useful in a big open environment like a social AR app.
  • Also, in big open spaces it could be useful to implement some kind of space partitioning (octrees or BVH) to improve culling and per-stroke interaction.
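
One possible workaround for the first point, sketched here under the assumption that the caller knows how many vertices have actually been written (this is not the A-Painter implementation), is to compute the bounds only over the used part of the buffer:

// Hedged sketch: compute a bounding sphere from the vertices actually written,
// ignoring the preallocated (0,0,0) entries that inflate the default bounds.
function computeUsedBoundingSphere (geometry, usedVertexCount) {
  var position = geometry.attributes.position;
  var points = [];
  for (var i = 0; i < usedVertexCount; i++) {
    points.push(new THREE.Vector3(position.getX(i), position.getY(i), position.getZ(i)));
  }
  geometry.boundingSphere = new THREE.Sphere().setFromPoints(points);
}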
