The Mozilla Blog: AV1 and the Video Wars of 2027

Author’s Note: This post imagines a dystopian future for web video, if we continue to rely on patented codecs to transmit media files. What if one company had a perpetual monopoly on those patents? How could it limit our access to media and culture? The premise of this cautionary tale is grounded in fact. However, the future scenario is fiction, and the entities and events portrayed are not intended to represent real people, companies, or events.

Illustration by James Dybvig

This post was originally published on Mozilla's Hacks blog.

The year is 2029. It’s been two years since the start of the Video Wars, and there’s no end in sight. It’s hard to believe how deranged things have become on earth. People are going crazy because they can’t afford web video fees – and there’s not much else to do. The world’s media giants have irrevocably twisted laws and governments to protect their incredibly lucrative franchise: the right to own their intellectual property for all time.

It all started decades ago, with an arcane compression technology and a cartoon mouse. As if we needed any more proof that truth is stranger than fiction.

Adulteration of the U.S. Legal System

In 1998, the U.S. Congress passed the Sonny Bono Copyright Term Extension Act. This new law extended copyright terms to the author’s lifetime plus 70 years, and to 95 years after publication for corporate works. The effort was driven by the Walt Disney Company, to protect its lucrative retail franchise around the animated character Mickey Mouse. Without this extension, Mickey would have entered the public domain, meaning anyone could create new cartoons and merchandise without fear of being sued by Disney. When the extension passed, it gave Disney another 20 years to profit from Mickey. The news sparked outrage from lawyers and academics at the time, but it was a dull and complex topic that most people didn’t understand or care about.

In 2020, Disney again lobbied to extend the law, so its copyright would last for 10,000 years. Its monopoly on our culture was complete. No art, music, video, or story would pass into the public domain for millennia. All copyrighted ideas would remain the private property of corporations. The quiet strangulation of our collective creativity had begun.

A small but powerful corporate collective called MalCorp took note of Disney’s success. Backed by deep-pocketed investors, MalCorp had quietly started buying the technology patents that made video streaming work over the internet. It revealed itself in 2021 as a protector of innovation. But its true goal was to create a monopoly on video streaming technology that would last forever, to shunt profits to its already wealthy investors. It was purely an instrument of greed.

Better Compression for Free

Now, there were some good guys in this story. As early as 2007, prescient tech companies wanted the web platform to remain free and open to all – especially for video. Companies like Cisco, Mozilla, Google, and others worked on new video codecs that could replace the patented, ubiquitous H.264 codec. They even combined their efforts in 2015 to create a royalty-free codec called AV1 that anyone could use free of charge.

AV1 was notable in that it offered better compression, and therefore better video quality, than any other codec of its time. But just as the free contender was getting off the ground, the video streaming industry was thrown into turmoil. Browser companies backed different codecs, and the market fragmented. Adoption stalled, and for years the streaming industry continued paying licensing fees for subpar codecs, even though better options were available.

The End of Shared Innovation

Meanwhile MalCorp found a way to tweak the law so its patents would never expire. It proposed a special amendment, just for patent pools, that said: Any time any part of any patent changes, the entire pool is treated as a new invention under U.S. law. With its deep pockets, MalCorp was able to buy the votes needed to get its law passed.

MalCorp’s patents would not expire. Not in 20 years. Not ever. And because patent law is about as interesting as copyright law, few protested the change.

Things went downhill quickly for advocates of the open web. MalCorp’s patents became broader, vaguer, ever-changing. With billions in its war chest, MalCorp was able to sue royalty-free codecs like AV1 out of existence. MalCorp had won. It had a monopoly on web streaming technology. It began, slowly at first, to raise licensing fees.

Gorgeous Video, Crushing Fees

For those who could afford it, web video got much better. MalCorp’s newest high-efficiency video codecs brought pixel-perfect 32K-Strato-Def images and 3D sound into people’s homes. Video and audio were clear and rich – better than real life. Downloads were fast. Images were crisp and spectacular. Fees were high.

Without access to any competing technologies, streaming companies had to pay billions instead of millions a year to MalCorp. Streaming services had to 100x their prices to cover their costs. Monthly fees rose to $4,500. Even students had to pay $50 a minute to watch a lecture on YouTube. Gradually, the world began to wake up to what MalCorp had done.

Life Indoors

By the mid-twenties, the Robotic Age had put most people out of work. The lucky ones lived on fixed incomes, paid by their governments. Humans were only needed for specialized service jobs, like nursery school teachers and style consultants. Even doctors were automated, using up-to-the-minute, crowd-sourced data to diagnose disease and track trends and outbreaks.

People were idle. Discontent was rising. Where once a retired workforce might have traveled or pursued hobbies, growing environmental problems rendered the outside world mostly uninhabitable. People hiked at home with their headsets on, enjoying stereoscopic birdsong and the idea of a fresh breeze. We lived indoors, in front of screens.

Locked In, Locked Out

It didn’t take long for MalCorp to become the most powerful corporation in the world. When video and mixed reality files made up 90 percent of all internet traffic, MalCorp was collecting on every transmission. Still, its greed kept growing.

Fed up with workarounds like piracy sites and peer-to-peer networks, MalCorp dismantled all legacy codecs. The slow, fuzzy, lousy videos that were vaguely affordable ceased to function on modern networks and devices. People noticed when the signal went dark. Sure, there was still television and solid-state media, but it wasn’t the same. Soon enough, all hell broke loose.

The Wars Begin

During Super Bowl LXII, football fans firebombed police stations in 70 cities, because listening to the game on radio just didn’t cut it. Thousands died in the riots and, later, in the crackdowns. Protesters picketed Disneyland, because the people had finally figured out what had happened to their democracy, and how it got started.

For the first time in years, people began to organize. They joined chat rooms and formed political parties like VidPeace and YouStream, vying for a majority. They had one demand: Give us back free video on the open web. They put banners on their vid-free Facebook feeds, advocating for the liberation of web video from greedy patent holders. They rallied around an inalienable right, once taken for granted, to be able to make and watch and share their own family movies, without paying MalCorp’s fees.

But it was too late. The opportunity to influence the chain of events had ended years before. Some say the tipping point was in 2019. Others blame the apathy and naiveté of early web users, who assumed tech companies and governments would always make decisions that served the common good. That capitalism would deliver the best services, in spite of powerful profit motives. And that the internet would always be free.

Mozilla Add-ons Blog: Volunteer Add-on Reviewer Applications Open

Thousands of volunteers around the world contribute to Mozilla projects in a variety of capacities, and extension review is one of them. Reviewers check extensions submitted to addons.mozilla.org (AMO) for their safety, security, and adherence to Mozilla’s Add-on Policies.

Last year, we paused onboarding new volunteer extension reviewers while we updated the add-on policies and review processes to address changes introduced by the transition to the WebExtensions API and the new post-review process.

Now that the policies, processes and guidelines have been refreshed, we are re-opening applications for our volunteer reviewer program. If you are a skilled JavaScript developer, have experience developing browser extensions, and are interested in helping to keep the extension ecosystem safe and healthy, please consider contributing as a volunteer reviewer. You can learn more about the add-on reviewer program here.

If you are interested, please check out our wiki to learn how to apply. We will follow up with applicants shortly.

The Mozilla Blog: Mozilla files arguments against the FCC – latest step in fight to save net neutrality

Today, Mozilla is filing our brief in Mozilla v. FCC – alongside other companies, trade groups, states, and organizations – to defend net neutrality rules against the FCC’s rollback that went into effect early this year. For the first time in the history of the public internet, the FCC has disavowed interest and authority to protect users from ISPs, who have both the incentives and means to interfere with how we access online content.

We are proud to be a leader in the fight for net neutrality both through our legal challenge in Mozilla v. FCC and through our deep work in education and advocacy for an open, equal, accessible internet. Users need to know that their access to the internet is not being blocked, throttled, or discriminated against. That means that the FCC needs to accept statutory responsibility in protecting those user rights — a responsibility that every previous FCC has supported until now. That’s why we’re suing to stop them from abdicating their regulatory role in protecting the qualities that have made the internet the most important communications platform in history.

This case is about your rights to access content and services online without your ISP blocking, throttling, or discriminating against your favorite services. Unfortunately, the FCC made this a political issue and followed party lines rather than protecting your right to an open internet in the US. Our brief highlights how fundamentally flawed this decision is:

– The FCC order fundamentally mischaracterizes how internet access works. Whether based on semantic contortions or simply an inherent lack of understanding, the FCC asserts that ISPs have no obligation to deliver the websites you request without interference.
– The FCC completely renounces its enforcement ability and tries to delegate that authority to other agencies. But only Congress can grant that authority; the FCC can’t simply decide that regulating telecommunications services and promoting competition is not its job.
– The FCC ignored the requirement to engage in a “reasoned decision making” process, disregarding much of the public record as well as its own data showing that consumers lack competitive choices for internet access, which gives ISPs the means to harm access to content and services online.

Additional Mozilla v. FCC briefs will be filed through November by various intervening parties and friends of the court. After that process is complete, oral arguments will take place and the court will rule.

Mozilla has been defending users’ access to the internet without interference from gatekeepers for almost a decade, both in the US and globally. Net neutrality is a core characteristic of the internet as we know it, and crucial for the economy and everyday lives. It is imperative that all internet traffic be treated equally, without discrimination against content or type of traffic — that’s how the internet was built and what has made it one of the greatest inventions of all time.

Brief below:

(As filed) Initial NG Petitioners Brief – Mozilla v FCC 20Aug2018

Nick Cameron: RLS 1.0 release candidate

The current version of the Rust Language Server (RLS), 0.130.5, is the first 1.0 release candidate. It is available on the nightly and beta channels, and from the 3rd of September it will be available with stable Rust.

1.0 for the RLS is a somewhat arbitrary milestone. We think the RLS can handle most small and medium-sized projects (notably, it doesn't work with Rust itself, but that is a large project with a very complex build system), and we think it is release quality. However, there are certainly limitations and many planned improvements.

It would be really useful if you could help us test the release candidate! Please report any crashes, projects where the RLS gives no information, or bugs where it gives incorrect information.

The easiest way to install the RLS is to install an extension for your favourite editor.

For most editors you will only need to have Rustup installed and the editor will install the rest.

What to expect

Syntax highlighting

Each editor does its own syntax highlighting

Code completion

Code completion is syntactic, performed by Racer. Because it is syntactic there are many instances where it is incomplete or incorrect. However, we believe it is useful.

Errors and warnings

Errors and other diagnostics are displayed inline. Exactly how the errors are presented depends on the editor.

Formatting

Formatting is handled by Rustfmt (which is also at the 1.0 release candidate stage).

Clippy

Clippy is installed as part of the RLS. You can turn it on with a setting in your editor or with the usual crate attribute.

Code intelligence

The RLS can do the following:

  • type and docs on hover (and sometimes signature info)
  • goto definition
  • find all references
  • find all implementations for traits and concrete types
  • find all symbols in the file/project
  • renaming (this will not work where a renaming would cause an error, such as where the field initialisation syntax is used)
  • change glob imports to list imports

These features will work for most identifiers, but won't work where identifiers are defined in a macro (and sometimes when used in a macro use). They also won't work for identifiers in module paths, except for the last part, e.g., in foo::bar::baz, the RLS has information about baz, but not foo or bar.

Daniel Stenberg: Project curl governance

Over time, we've slowly been adjusting the curl project and its documentation so that we might at some point actually qualify for the CII open source Best Practices at silver level.

We qualified at the base level a while ago as one of the first projects which did that.

Recently, one of those issues we fixed was documenting the governance of the curl project. How exactly the curl project is run, what the key roles are and how decisions are made. That document is now in our git repo.

curl

The curl project is what I would call a fairly typical smallish open source project with a quite active and present project leader (me). We have a small set of maintainers who are independently allowed to merge commits to git (via pull-requests), and who do.

Any decision or any code change that was done or is about to be done can be brought up for questioning or discussion on the mailing list. Nothing is ever really seriously written in stone (except our backwards compatible API). If we made the wrong decision in the past, we should reconsider now.

Oh right, we also don't have any legal entity. There's no company or organization behind this or holding any particular rights. We're not part of any umbrella organization. We're all just individuals distributed over the globe.

Contributors

No active contributor or maintainer (that I know of) gets paid to work on curl regularly. No company has any particular say or weight to decide where the project goes next.

Contributors fix bugs and add features as part of their daily jobs or in their spare time. We get code submissions from well over a hundred unique authors every year.

Dictator

As the founder of the project and author of more than half of all commits, I am what others call a Benevolent Dictator. I can veto things and I can merge things in spite of objections, although I avoid that as far as possible.

I feel that I generally have people's trust and that the community expects me to be able to take decisions and drive this project in an appropriate direction, in a fashion that has worked out fine for the past twenty years.

I post all my patches (except occasional minuscule changes) as pull-requests on github before merge, to allow comments, discussions, reviews and to make sure they don't break any tests.

I announce and ask for feedback on changes or larger things that I want to do, on the mailing list for wider attention, to bring up discussions and fish for additional ideas or for people to point out obvious mistakes. Many times, my calls for opinions or objections are met with silence and I will then take that as "no objections" and move forward in a way I deem sensible.

Every now and then I blog about specific curl features or changes we work on, to highlight them and help out the user community "out there" to discover and learn what curl can do, or might be able to do soon.

I'm doing this primarily in my spare time. My employer also lets me spend some work hours on curl.

Long-term

One of the prime factors that has made curl and libcurl successful and helped them end up as one of the world's most widely used software components, I'm convinced, is that we don't break stuff.

By this I mean that once we've introduced functionality, we struggle hard to maintain that functionality from that point on and into the future. When we accept code and features into the project, we do this knowing that the code will likely remain in our code for decades to come. Once we've accepted the code, it becomes our responsibility and now we'll care for it dearly for a long time forward.

Since we're so few developers and maintainers in the project, I can also add that I'm very much aware that in many cases adopting code and merging patches mean that I will have to fix the remaining bugs and generally care for the code the coming years.

Changing governance?

I'm dictator of the curl project for practical reasons, not because I consider it an ideal way to run projects. If there were more people involved who cared enough about what and how we're doing things we could also change how we run the project.

But until I sense such an interest, I don't think the current model is bad - and our conquering of the world over recent years could also be seen as proof that the project at least sometimes goes in a direction that users approve of. And we are, after all, best practices certified.

I realize I come off sounding like a real-world dictator when I say things like this, but I genuinely believe that our governance is based on necessity and on what works, not on a conviction that this is how projects ought to be run.

I've run the project since its inception in 1998. One day I'll get bored or get run over by a bus. Then, at the very least, the project will need another way to be run...

Silver level?

We're only two requirements away from Best Practices Silver level compliance and we've been discussing a bit lately (or perhaps: I've asked the question) whether the last criteria are actually worth the trouble for us or not.

  1. We need to enforce "Signed-off-by" lines in commits to maintain the Developer Certificate of Origin. This is easy in itself, and I've only held off on it this long because we've had zero interest or requirements for this from contributors and users. Added administration for little gain.
  2. We're asked to provide an assurance case: "a description of the threat model, clear identification of trust boundaries, an argument that secure design principles have been applied, and an argument that common implementation security weaknesses have been countered." - This is work we haven't done and a document we don't have. And again: nobody has actually ever asked for this outside of this certificate form.

Do you think we should put in the extra effort and check off the final two requirements as well? Do you think they actually make the project better?

Tim Taubert: Bitslicing With Karnaugh Maps

Bitslicing, in cryptography, is the technique of converting arbitrary functions into logic circuits, thereby enabling fast, constant-time implementations of cryptographic algorithms immune to cache and timing-related side channel attacks.

My last post Bitslicing, An Introduction showed how to convert an S-box function into truth tables, then into a tree of multiplexers, and finally how to find the lowest possible gate count through manual optimization.

Today’s post will focus on a simpler and faster method. Karnaugh maps help simplify Boolean algebra expressions by taking advantage of humans’ pattern-recognition capability. In short, we’ll bitslice an S-box using K-maps.

A tiny S-box

Here again is the 3-to-2-bit S-box function from the previous post.

uint8_t SBOX[] = { 1, 0, 3, 1, 2, 2, 3, 0 };

An AES-inspired S-box that interprets three input bits as a polynomial in GF(2³) and computes its inverse mod P(x) = x³ + x² + 1, with 0⁻¹ := 0. The result plus (x² + 1) is converted back into bits and the MSB is dropped.

This S-box can be represented as a function of three Boolean variables, where f(0,0,0) = 0b01, f(0,0,1) = 0b00, f(0,1,0) = 0b11, etc. Each output bit can be represented by its own Boolean function where fL(0,0,0) = 0 and fR(0,0,0) = 1, fL(0,0,1) = 0 and fR(0,0,1) = 0, …

A truth table per output bit

Each output bit has its own Boolean function, and therefore also its own truth table. Here are the truth tables for the Boolean functions fL(a,b,c) and fR(a,b,c):

 abc | SBOX            abc | f_L()         abc | f_R()
-----|------          -----|-------       -----|-------
 000 | 01              000 | 0             000 | 1
 001 | 00              001 | 0             001 | 0
 010 | 11              010 | 1             010 | 1
 011 | 01     --->     011 | 0      +      011 | 1
 100 | 10              100 | 1             100 | 0
 101 | 10              101 | 1             101 | 0
 110 | 11              110 | 1             110 | 1
 111 | 00              111 | 0             111 | 0

Whereas previously at this point we built a tree of multiplexers out of each truth table, we’ll now build a Karnaugh map (K-map) per output bit.

Karnaugh Maps

The values of fL(a,b,c) and fR(a,b,c) are transferred onto a two-dimensional grid with the cells ordered in Gray code. Each cell position represents one possible combination of input bits, while each cell value represents the value of the output bit.
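
Written out as plain text, with rows indexed by a and columns by the pair bc in Gray-code order (00, 01, 11, 10), the two maps derived from the truth tables above look like this:

  f_L(a,b,c)                        f_R(a,b,c)

  a \ bc | 00  01  11  10        a \ bc | 00  01  11  10
 --------|---------------       --------|---------------
     0   |  0   0   0   1           0   |  1   0   1   1
     1   |  1   1   0   1           1   |  0   0   0   1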

The row and column indices (a) and (b || c) are ordered in Gray code rather than binary numerical order to ensure only a single variable changes between each pair of adjacent cells. Otherwise, products of predicates (a & b, a & c, …) would scatter.

These products are what you want to find to get a minimum length representation of the truth function. If the output bit is the same at two adjacent cells, then it’s independent of one of the two input variables, because (a & ~b) | (a & b) = a.

Spotting patterns

The heart of simplifying Boolean expressions via K-maps is finding groups of adjacent cells with value 1. The rules are as follows:

  • Groups are rectangles of 2ⁿ cells with value 1.
  • Groups may not include cells with value 0.
  • Each cell with value 1 must be in at least one group.
  • Groups may be horizontal or vertical, not diagonal.
  • Each group should be as large as possible.
  • There should be as few groups as possible.
  • Groups may overlap.

First, we mark all cells with value 1. We then mark the two horizontal groups of size 2¹ in red. The two vertical groups, also of size 2¹, are marked in green.

On fR’s K-map on the right, the red and green group overlap. As per the rules above, that’s perfectly fine. The cell at abc=110 can’t be without a group and we’re instructed to form the largest groups possible, so they overlap.

But wait, you say, what’s going on with the blue rectangle on the right?

Wrapping around

A somewhat unexpected property of K-maps is that they’re not really grids, but actually toruses. In plain English: they wrap around the top, bottom, and the sides.

Look at this neat animation on Wikipedia that demonstrates how a rectangle can turn into a donut – or rather, a torus. Adjacent thus has a special definition here: cells on the very right touch those on the far left, as do those at the very top and bottom.

Another way to understand this property is to imagine that the columns don’t start at 00 but rather at 01, and so we rotate the whole K-map by one to the left. Then the rectangles wouldn’t need to wrap around and they would all fit on the grid nicely.

Now that all cells with a 1 have been assigned to as few groups as possible, let’s get our hands dirty and write some code.

A bitsliced SBOX() function

K-maps are read groupwise: we look at each cell’s position and focus on the input values that do not change throughout the group. Values that do change are ignored.

One function for fL(a,b,c) ...

The red group covers the cells at positions 100 and 101. The values a=1 and b=0 are constant; they will be included in the group’s term. The value of c changes and is therefore irrelevant. The term is (a & ~b).

The green group covers the cells at 010 and 110. We ignore a, and include b=1 and c=0. The term is (b & ~c).

SBOXL() is the disjunction of the group terms we collected from the K-map. It lists all possible combinations of input values that lead to output value 1.

uint8_t SBOXL(uint8_t a, uint8_t b, uint8_t c) {
  return (a & ~b) | (b & ~c);
}

... and another one for fR(a,b,c)

The red group covers the cells at 011 and 010. The term is (~a & b).

The green group covers the cells at 010 and 110. The term is (b & ~c).

The blue group covers the cells at 000 and 010. The term is (~a & ~c).

uint8_t SBOXR(uint8_t a, uint8_t b, uint8_t c) {
  return (~a & b) | (b & ~c) | (~a & ~c);
}

Great, that’s all we need! Now we can merge those two functions and compare that to the result of the previous post.

Putting it all together

The first three variables ensure that we negate inputs only once. t0 replaces the common subexpression b & nc. Any optimizing compiler would do the same.

void SBOX(uint8_t a, uint8_t b, uint8_t c, uint8_t* l, uint8_t* r) {
  uint8_t na = ~a;
  uint8_t nb = ~b;
  uint8_t nc = ~c;

  uint8_t t0 = b & nc;

  *l = (a & nb) | t0;
  *r = (na & b) | (na & nc) | t0;
}

Ten gates. That’s one more than the manually optimized version from the last post. What’s missing? Turns out that K-maps sometimes don’t yield the minimal form and we have to simplify further by taking out common factors.

The conjunctions in the term (na & b) | (na & nc) have the common factor na and, due to the Distributivity Law, can be rewritten as na & (b | nc). That removes one of the AND gates and leaves two.

void SBOX(uint8_t a, uint8_t b, uint8_t c, uint8_t* l, uint8_t* r) {
  uint8_t na = ~a;
  uint8_t nb = ~b;
  uint8_t nc = ~c;

  uint8_t t0 = b & nc;
  uint8_t t1 = b | nc;

  *l = (a & nb) | t0;
  *r = (na & t1) | t0;
}

Nine gates. That’s exactly what we achieved by tedious artisanal optimization.

Summing up

K-maps are neat and trivial to use once you’ve worked through an example yourself. They yield minimal circuits fast, compared to manual optimization where the effort grows exponentially with the number of terms.

There is one downside though, and it’s that the original variant of a K-map can’t be used with more than four input variables. There are variants that do work with more than four variables but they actually make it harder to spot groups visually.

The Quine–McCluskey algorithm is functionally identical to K-maps but can handle an arbitrary number of input variables in its original variant – although the running time grows exponentially with the number of variables. Not too problematic for us, S-boxes usually don’t have too many inputs anyway…

Mozilla VR Blog: This Week in Mixed Reality: Issue 16

It's mostly more bug fixes this week, and starting on some cool new features, but first we want to tell you about an exciting competition that launched this week.

On Monday Andrzej Mazur launched the 2018 edition of the JS13KGames competition. As the name suggests, you have to create a game using no more than thirteen kilobytes of zipped JavaScript. Check out some of last year's winners to see what is possible in 13k.

This year Mozilla is sponsoring the new WebXR category, which lets you use A-Frame or Babylon.js without counting towards the 13k. See the full rules for details. Prizes this year include the Oculus Go for the top three champions.

Browsers

We demoed Firefox Reality at the Mozilla Gigabit event in Mountain View on 8/15. The Mozilla Gigabit Community Fund provides grant funding in select U.S. communities to support pilot tests of gigabit technologies such as virtual reality, 4K video, artificial intelligence, and their related curricula.

The GeckoView team added APIs for overriding screen size and display DPI, which will enable more UI customization in the future. We also did more work to improve model load times, plus general performance fixes.

Did you know you can see everything that goes into Firefox Reality on GitHub? Every bug and commit is available for you to see.

Social

Tons of bug fixes this week for stability, performance, and the drawing tool.

See you next week!

Steve Fink: Type examination in gdb

Sometimes, the exact layout of objects in memory becomes very important. Some situations you may encounter: When overlaying different types as “views” of the same memory location, perhaps via reinterpret_cast, unions, or void*-casting. You want to know where the field in one view lands in another. When examining a struct layout’s packing, to see if […]

Mozilla Add-ons Blog: Share files easily with extensions

WeTransfer offers a simple, extensions-based file transferring solution.

When we want to share digital files, most people think of popular file hosting services like Box or Dropbox, or other common methods such as email and messaging apps. But did you know there are easier—and more privacy-focused—ways to do it with extensions? WeTransfer and Fire File Sender are two intriguing extension options.

WeTransfer allows you to send files up to 2GB in size with a link that expires seven days from upload. It’s really simple to use—just click the toolbar icon and a small pop-up appears inviting you to upload files and copy links for sharing. WeTransfer uses the highest security standards and is compliant with EU privacy laws. Better still, recipients downloading files sent through WeTransfer won’t get bombarded with advertisements; rather, they’ll see beautiful wallpapers picked by the WeTransfer editorial team. If you’re interested in additional eye-pleasing backgrounds, check out WeTransfer Moment.

Fire File Sender allows you to send files up to 4GB each. Once the file is successfully uploaded, a link and a six-digit code are generated for you to share. The link and code will expire 10 minutes after upload or after one download—whichever occurs first. Also, within the 10-minute time frame, you have the ability to stop sharing the file. Fire File Sender uses the browser sidebar for the uploading and downloading of files through Send Anywhere APIs.

Best of all, neither WeTransfer nor Fire File Sender requires an account to use their service. The enhanced anonymity of the file exchange, plus the automatic deletion of files (Dropbox and Google require manual deletion), make these extensions strong choices for privacy-minded folks.

I should also mention Firefox Send, though it’s a web service and not an extension. Firefox Send is Mozilla’s home-grown solution to file sharing. Created by the Mozilla Test Pilot team, Firefox Send allows you to securely share files up to 1GB in size directly from your browser. Any links generated will either expire after one download or 24 hours, whichever comes first. Taking privacy matters even further, files distributed through Firefox Send are encrypted directly in the browser and then uploaded to Mozilla. Mozilla does not have the ability to access the content of the encrypted file.  (The Test Pilot team constantly strives to improve on their project; its development progress can be viewed on GitHub.)

 

Robert O'Callahan: ASAN And LSAN Work In rr

AddressSanitizer has worked in rr for a while. I just found that LeakSanitizer wasn't working and landed a fix for that. This means you can record an ASAN build, and if there's an ASAN error or LSAN finds a leak, you can replay it in rr knowing the exact addresses of the data that leaked — along with the usual rr goodness of reverse execution, watchpoints, etc. Well, hopefully. Report an issue if you find more problems.

Interestingly, LSAN doesn't work under gdb, but it does work under rr! LSAN uses the ptrace() API to examine threads when it looks for leaks, and it can't ptrace a thread that gdb is already ptracing (the ptrace design deeply relies on there being only one ptracer per thread). rr uses ptrace too, but when one rr tracee thread tries to ptrace another rr tracee thread, rr emulates the ptrace calls so that they work as if rr wasn't present.

Mozilla Marketing Engineering & Ops Blog: Using Brotli compression to reduce CDN costs

The Snippets Service allows Mozilla to communicate with Firefox users directly by placing a snippet of text and an image on their new tab page. Snippets share exciting news from the Mozilla World, useful tips and tricks based on user activity and sometimes jokes.

To achieve personalized, activity-based messaging in a privacy-respecting and efficient manner, the service creates a Bundle of Snippets per locale. Bundles are HTML documents that contain all Snippets targeted to a group of users, including their style sheets, images, metadata and the JS decision engine.

The Bundle is transferred to the client where the locally executed decision engine selects a snippet to display. A carefully designed system with multiple levels of caching takes care of the delivery. One layer of caching is a CloudFront CDN.

The problem

During the last few months we observed a significant uptick in our CDN costs as Mozilla’s Lifecycle Marketing Team was increasing the number of Snippets for the English language from about 10 to 150.

The Bundle file size increased from about 200 KiB to more than 4 MiB. Given that Firefox requests new Bundles every 4 hours, that translated to about 75 TB of transferred data per day, or about 2.25 PB (yes, that’s Petabytes!) per month, despite the local browser caching.

The solution

Bundles include everything a Snippet needs to be displayed: the targeting rules, the text and the image in a base64-encoded format. The first hypothesis was that we could reduce the Bundle size by reducing the image size. We ran optipng against all images in the bundle to test the hypothesis. The images were optimized, but the Bundle shrank by only 100 KiB, about 2.5% of the total size.

The second hypothesis was to replace the images with links to images. Since not all Snippets are displayed to all users, we can benefit by not transferring all images to all users. This reduced the Bundle size to 1.1 MiB, without accounting for the size of the images that would still be transferred.

The third hypothesis was to replace GZip with Brotli compression. Brotli is a modern compression algorithm supported by Firefox and all other major browsers as an alternative method for HTTP Compression.

Brotli reduced the size of the bundle down to 500 KiB, about 25% of the size achieved by the CloudFront GZip mechanism, which compressed the bundle to about 2.2 MiB.

Since CloudFront does not support on-the-fly Brotli compression, we prepare and compress the Bundles at the app level before uploading them to S3. By adding the correct Content-Encoding headers, the S3 objects are ready to be served by the CDN.

Conclusions

Although all three solutions can reduce the Bundle size, the third provided the best performance-to-effort ratio and we proceeded with its implementation. Next-day reports graphed a significant drop in costs, marking the project a success. From the original average of 75 TB of transferred data per day, we dropped down to 15 TB. We are going to improve further in the future by moving the images outside the Bundle.

It’s clear that Brotli compression can achieve significantly higher compression rates compared to GZip at the expense of more CPU time. Even though our CDN of choice doesn’t support Brotli, assets can be pre-compressed and uploaded ready for use.

Kevin Brosnan: General steps for building older versions of Firefox for Android

Step 0: Have a current working build environment for building Firefox for Android for a recent checkout of mozilla-central.

Step 1: Figure out when the revision you are interested in was checked in. hg log -r <revision> will give you a date of the checkin.

Step 2: Check the revision history of the Simple Firefox for Android build guide; you want to find a revision slightly before the date from step 1. At the bottom of the page is a “Required Android SDK and NDK versions” section; use it as a reference for the next several steps.

Step 3. Install the version of the Android SDK Platform listed on the DevMo page via Android Studio’s SDK manager: Tools -> SDK Manager -> SDK Platforms -> mark the API version you need -> click Apply.

Step 4. Install the SDK build tools using Android Studio’s SDK manager: Tools -> SDK Manager -> SDK Tools -> mark the SDK build tools version you need -> click Apply.

Step 5. Get the correct NDK from Google’s archives. Then extract it to where you store your NDKs. $HOME/.mozbuild is the default.

Step 6. Get the Android SDK tools. This can be a real pain as Google does not publish links to download these. You will need to craft your own version of the URL. The URL format is https://dl.google.com/android/repository/tools_r<version>-<operating-system>.zip where <version> matches the “Android SDK Tools” line from DevMo and <operating-system> is macosx or linux. Example: https://dl.google.com/android/repository/tools_r23.0.1-linux.zip

Step 7. Create a copy of the SDK, delete the tools directory, and put the folder from the Android SDK Tools download (step 6 above) in its place. Example: $HOME/.mozbuild/android-sdk-linux-23.0.1/

Step 8. Update your .mozconfig to point to the older NDK and SDK versions
# Build Firefox for Android:
ac_add_options --enable-application=mobile/android
ac_add_options --target=arm-linux-androideabi
# With the following Android SDK and NDK:
ac_add_options --with-android-sdk="/absolute/path/to/android-sdk-linux-23.0.1"
ac_add_options --with-android-ndk="/absolute/path/to/android-ndk-r11c"

Step 9. ./mach build
./mach package
./mach install
./mach run

Mozilla Open Policy & Advocacy Blog: Brazilian data protection is strong step forward, action needed on enforcement

Brazil’s newly passed data protection law is a huge step forward in the protection of user privacy. It’s great to see Brazil, long a champion of digital rights, join the ranks of countries with data protection laws on the books. We are concerned, however, about President Temer’s veto of several provisions, including the Data Protection Authority. We urge the President and Brazilian policymakers to swiftly advance new legislation or policies to ensure effective enforcement of the law.

Mike Hoye: Time Dilation


[ https://www.youtube.com/embed/JEpsKnWZrJ8 ]

I riffed on this a bit over at twitter some time ago; this has been sitting in the drafts folder for too long, and it’s incomplete, but I might as well get it out the door. Feel free to suggest additions or corrections if you’re so inclined.

You may have seen this list of latency numbers every programmer should know, and I trust we’ve all seen Grace Hopper’s classic description of a nanosecond at the top of this page, but I thought it might be a bit more accessible to talk about CPU-scale events in human-scale transactional terms. So: if a single CPU cycle on a modern computer was stretched out as long as one of our absurdly tedious human seconds, how long do other computing transactions take?

If a CPU cycle is 1 second long, then:

  • Getting data out of L1 cache is about the same as getting your data out of your wallet; about 3 seconds.
  • At 9 to 10 seconds, getting data from L2 cache is roughly like asking your friend across the table for it.
  • Fetching data from the L3 cache takes a bit longer – it’s roughly as fast as having an Olympic sprinter bring you your data from 400 meters away.
  • If your data is in RAM you can get it in about the time it takes to brew a pot of coffee; this is how long it would take a world-class athlete to run a mile to bring you your data, if they were running backwards.
  • If your data is on an SSD, though, you can have it in six to eight days, equivalent to having it delivered from the far side of the continental U.S. by bicycle, about as fast as that has ever been done.
  • In comparison, platter disks are delivering your data by horse-drawn wagon, over the full length of the Oregon Trail. Something like six to twelve months, give or take.
  • Network transactions are interesting – platter disk performance is so poor that fetching data from your ISP’s local cache is often faster than getting it from your platter disks; at two to three months, your data is being delivered to New York from Beijing, via container ship and then truck.
  • In contrast, a packet requested from a server on the far side of an ocean might as well have been requested from the surface of the moon, at the dawn of the space program – about eight years, from the beginning of the Apollo program to Armstrong, Aldrin and Collins’ successful return to earth.
  • If your data is in a VM, things start to get difficult – a virtualized OS reboot takes about the same amount of time as has passed between the Renaissance and now, so you would need to ask Leonardo Da Vinci to secretly encode your information in one of his notebooks, and have Dan Brown somehow decode it for you in the present? I don’t know how reliable that guy is, so I hope you’re using ECC.
  • That’s all if things go well, of course: a network timeout is roughly comparable to the elapsed time between the dawn of the Sumerian Empire and the present day.
  • In the worst case, if a CPU cycle is 1 second, cold booting a racked server takes approximately all of recorded human history, from the earliest Indonesian cave paintings to now.

Firefox Nightly: These Weeks in Firefox: Issue 42

Highlights

  • New Onboarding experience in Firefox 62 currently only as an experiment.
    • The onboarding critters when first starting up Firefox

      Totally adorable onboarding critters (Scientific name: Totes Adorbs Familiaris)

  • The new about:policies helps administrators verify if they have configured policies correctly, learn more about the different policies, and resolve errors.

    • The new about:policies page, showing which policies are enabled by system administrators

      about:policies, coming soon!

  • The about:performance UI is currently being updated; it is currently behind a pref. More details in bug 1477677.
    • The new about:performance page showing a table of open tabs ordered by how much they're impacting system resource usage

      The new about:performance will show you what pages are draining your system resources

  • Doug Thayer pushed the ClientStorage work through the finish line! This should improve responsiveness and (maybe) power usage on macOS. This should also allow tab warming to ride to release on macOS!

Project Updates

Add-ons / Web Extensions

Browser Architecture

  • XUL/XBL Replacement Newsletter #6 posted.
  • Browser console is now loaded as a html document.
  • getElementsByAttribute[NS] now works on (chrome) HTML documents.
  • Added document.createXULElement. No namespace funkiness!
  • Working on a plan to either remove broadcaster/observers or support them in HTML.
  • Investigating feasibility of landing rkv as NPOTB so potential consumers can investigate it for suitability to their use cases (bug 1445451).

Lint

  • We are switching most ChromeUtils.import calls to be treated as explicit variable declarations by ESLint. This has the advantage of triggering no-unused-vars more often (especially in jsm files), to find unused imports.
    • This doesn’t work where modules.json lists a file as exporting two symbols (only one of them might be used, so we haven’t weeded them out yet).
    • Declarations of the better form const {Foo} = ChromeUtils.import("resource://foo.jsm", {}); are already handled through the destructured variables.

Performance

Policy Engine

  • About:policies page (Bug 1472528) – Kanika Saini
    • Active Policies
      • Policies vary a lot
        • Some are just boolean values, e.g. DisableAppUpdate
        • Some are arrays of objects with keys and values, e.g. Bookmarks
        • Some are objects whose keys have values containing arrays at a deeper level, e.g. Permissions
    • Documentation

      • Showing the built-in documentation in about:policies

        The documentation is built-in! Alright!

        Showing the schemas for some policies inside the about:policies built-in documentation

        Showing off the schema for some policies

      • Machine-only icon warns the administrator about such policies
      • Each policy row is collapsible and expands on click to display more information about the policy, e.g. the schema for the policy
    • Errors
      • Showing the error interface in about:policies, with some example errors in it.

        When things go wrong with policy management, error messages go here.

      • Error tab is only visible when there is an error
      • Gives a brief summary of the error, relating to the Policy Engine only

Search and Navigation

Address Bar & Search

Places

Test Pilot

  • Side View is a hit!
    • MAU graph:

      Showing a MAU graph of our Test Pilot users

      Side View seems to be pretty popular with our Test Pilot users.

    • Next for Side View: added to Shield queue
  • Screenshots
    • New annotations features shipped! Undo/Redo (Barry) & Text tool (Punam)
    • Current sprint is mostly server-focused:
      • Finishing the last few bugs on new features, minor release later this week
      • Starting work on a redesign with tighter FxA integration & better accessibility
        • Soon: work with Kimberly from accessibility team to add accessibility testing to our Selenium tests
    • Client updates:
      • Bootstrap removal work continues
        • Telemetry API for internal WebExtensions got R+, will be landing soon
      • Adding Barry and Punam as peers on the Firefox Screenshots module

Web Payments

  • Working through final bugs before WebPayments goes through user testing.
  • Prathiksha finished her internship last week. We are very grateful for her contributions!

Hacks.Mozilla.Org: Dweb: Building a Resilient Web with WebTorrent

In this series we are covering projects that explore what is possible when the web becomes decentralized or distributed. These projects aren’t affiliated with Mozilla, and some of them rewrite the rules of how we think about a web browser. What they have in common: these projects are open source, open for participation, and share Mozilla’s mission to keep the web open and accessible for all.

The web is healthy when the financial cost of self-expression isn’t a barrier. In this installment of the Dweb series we’ll learn about WebTorrent – an implementation of the BitTorrent protocol that runs in web browsers. This approach to serving files means that websites can scale with as many users as are simultaneously viewing the website – removing the cost of running centralized servers at data centers. The post is written by Feross Aboukhadijeh, the creator of WebTorrent, co-founder of PeerCDN and a prolific NPM module author… 225 modules at last count! –Dietrich Ayala

What is WebTorrent?

WebTorrent is the first torrent client that works in the browser. It’s written completely in JavaScript – the language of the web – and uses WebRTC for true peer-to-peer transport. No browser plugin, extension, or installation is required.

Using open web standards, WebTorrent connects website users together to form a distributed, decentralized browser-to-browser network for efficient file transfer. The more people use a WebTorrent-powered website, the faster and more resilient it becomes.

Screenshot of the WebTorrent player interface

Architecture

The WebTorrent protocol works just like BitTorrent protocol, except it uses WebRTC instead of TCP or uTP as the transport protocol.

In order to support WebRTC’s connection model, we made a few changes to the tracker protocol. Therefore, a browser-based WebTorrent client or “web peer” can only connect to other clients that support WebTorrent/WebRTC.

Once peers are connected, the wire protocol used to communicate is exactly the same as in normal BitTorrent. This should make it easy for existing popular torrent clients like Transmission and uTorrent to add support for WebTorrent. Vuze already has support for WebTorrent!

Diagram showing the decentralized P2P network of torrents

Getting Started

It only takes a few lines of code to download a torrent in the browser!

To start using WebTorrent, simply include the webtorrent.min.js script on your page. You can download the script from the WebTorrent website or link to the CDN copy.

<script src="webtorrent.min.js"></script>

This provides a WebTorrent function on the window object. There is also an npm package available.

var client = new WebTorrent()

// Sintel, a free, Creative Commons movie
var torrentId = 'magnet:...' // Real torrent ids are much longer.

var torrent = client.add(torrentId)

torrent.on('ready', () => {
  // Torrents can contain many files. Let's use the .mp4 file
  var file = torrent.files.find(file => file.name.endsWith('.mp4'))

  // Display the file by adding it to the DOM.
  // Supports video, audio, image files, and more!
  file.appendTo('body')
})

That’s it! Now you’ll see the torrent streaming into a <video> tag in the webpage!

Learn more

You can learn more at webtorrent.io, or by asking a question in #webtorrent on Freenode IRC or on Gitter. We’re looking for more people who can answer questions and help people with issues on the GitHub issue tracker. If you’re a friendly, helpful person and want an excuse to dig deeper into the torrent protocol or WebRTC, then this is your chance!

 

 

Tim Taubert: Bitslicing, an Introduction

Bitslicing (in software) is an implementation strategy enabling fast, constant-time implementations of cryptographic algorithms immune to cache and timing-related side channel attacks.

This post intends to give a brief overview of the general technique, not requiring much of a cryptographic background. It will demonstrate bitslicing a small S-box, talk about multiplexers, LUTs, Boolean functions, and minimal forms.

What is bitslicing?

Matthew Kwan coined the term about 20 years ago after seeing Eli Biham present his paper A Fast New DES Implementation in Software. He later published Reducing the Gate Count of Bitslice DES showing an even faster DES building on Biham’s ideas.

The basic concept is to express a function in terms of single-bit logical operations – AND, XOR, OR, NOT, etc. – as if you were implementing a logic circuit in hardware. These operations are then carried out for multiple instances of the function in parallel, using bitwise operations on a CPU.

In a bitsliced implementation, instead of having a single variable storing a, say, 8-bit number, you have eight variables (slices). The first storing the left-most bit of the number, the next storing the second bit from the left, and so on. The parallelism is bounded only by the target architecture’s register width.
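
To make this concrete, here is a small sketch (not from the original post; to_slices is a made-up helper, and uint8_t again comes from <stdint.h>) that transposes eight 8-bit values into eight slices, so that slice[0] holds the left-most bit of every input and input n occupies bit n of each slice:

/* Transpose eight 8-bit inputs into eight slices. slice[0] collects the
   left-most (most significant) bit of every input; input n lands in bit
   position n of each slice. */
void to_slices(const uint8_t in[8], uint8_t slice[8]) {
  for (int bit = 0; bit < 8; bit++) {
    slice[bit] = 0;
    for (int n = 0; n < 8; n++)
      slice[bit] |= (uint8_t)(((in[n] >> (7 - bit)) & 1) << n);
  }
}

Running a bitsliced circuit once on these slice variables then evaluates all eight instances at the same time.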

What’s it good for?

Biham applied bitslicing to DES, a cipher designed to be fast in hardware. It uses eight different S-boxes that were usually implemented as lookup tables. Table lookups in DES, however, are rather inefficient, since one has to collect six bits from different words, combine them, and afterwards put each of the four resulting bits in a different word.

Speed

In classical implementations, these bit permutations would be implemented with a combination of shifts and masks. In a bitslice representation though, permuting bits really just means using the “right” variables in the next step; this is mere data routing, which is resolved at compile-time, with no cost at runtime.

Additionally, the code is extremely linear so that it usually runs well on heavily pipelined modern CPUs. It tends to have a low risk of pipeline stalls, as it’s unlikely to suffer from branch misprediction, and plenty of opportunities for optimal instruction reordering for efficient scheduling of data accesses.

Parallelization

With a register width of n bits, as long as the bitsliced implementation is no more than n times slower to run a single instance of the cipher, you end up with a net gain in throughput. This only applies to workloads that allow for parallelization. CTR and ECB mode always benefit, CBC and CFB mode only when decrypting.

Constant execution time

Constant-time, secret independent computation is all the rage in modern applied cryptography. Bitslicing is interesting because by using only single-bit logical operations the resulting code is immune to cache and timing-related side channel attacks.

Fully Homomorphic Encryption

The last decade brought great advances in the field of Fully Homomorphic Encryption (FHE), i.e. computation on ciphertexts. If you have a secure crypto scheme and an efficient NAND gate you can use bitslicing to compute arbitrary functions of encrypted data.

Bitslicing a small S-box

Let’s work through a small example to see how one could go about converting arbitrary functions into a bunch of Boolean gates.

Imagine a 3-to-2-bit S-box function, a component found in many symmetric encryption algorithms. Naively, this would be represented by a lookup table with eight entries, e.g. SBOX[0b000] = 0b01, SBOX[0b001] = 0b00, etc.

uint8_t SBOX[] = { 1, 0, 3, 1, 2, 2, 3, 0 };

This AES-inspired S-box interprets three input bits as a polynomial in GF(2³) and computes its inverse mod P(x) = x³ + x² + 1, with 0⁻¹ := 0. The result plus (x² + 1) is converted back into bits and the MSB is dropped.
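
For reference, the table can be reproduced with a few lines of brute-force arithmetic in GF(2³). This is only a sketch to make the construction explicit – gf8_mul, gf8_inv and sbox_entry are made-up helper names, not part of the original post:

/* Multiply two elements of GF(2^3) modulo P(x) = x^3 + x^2 + 1. */
uint8_t gf8_mul(uint8_t a, uint8_t b) {
  uint8_t r = 0;
  for (int i = 0; i < 3; i++) {
    if (b & 1)
      r ^= a;
    b >>= 1;
    /* Multiply a by x and reduce, using x^3 = x^2 + 1. */
    uint8_t overflow = a & 0x4;
    a = (a << 1) & 0x7;
    if (overflow)
      a ^= 0x5;
  }
  return r;
}

/* Find the inverse by brute force, with 0^-1 := 0. */
uint8_t gf8_inv(uint8_t a) {
  for (uint8_t b = 1; b < 8; b++)
    if (gf8_mul(a, b) == 1)
      return b;
  return 0;
}

/* Invert, add x^2 + 1 (= 0b101), drop the MSB. */
uint8_t sbox_entry(uint8_t v) {
  return (gf8_inv(v) ^ 0x5) & 0x3;
}

Evaluating sbox_entry() for the inputs 0 through 7 yields 1, 0, 3, 1, 2, 2, 3, 0 – exactly the table above.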

You can think of the above S-box’s output as being a function of three Boolean variables, where for instance f(0,0,0) = 0b01. Each output bit can be represented by its own Boolean function, i.e. fL(0,0,0) = 0 and fR(0,0,0) = 1.

LUTs and Multiplexers

If you’ve dealt with FPGAs before you probably know that these do not actually implement Boolean gates, but allow Boolean algebra by programming Look-Up-Tables (LUTs). We’re going to do the reverse and convert our S-box into trees of multiplexers.

Multiplexer is just a fancy word for data selector. A 2-to-1 multiplexer selects one of two input bits. A selector bit decides which of the two inputs will be passed through.

bool mux(bool a, bool b, bool s) {
  return s ? b : a;
}

Here are the LUTs, or rather truth tables, for the Boolean functions fL(a,b,c) and fR(a,b,c):

 abc | SBOX            abc | f_L()         abc | f_R()
-----|------          -----|-------       -----|-------
 000 | 01              000 | 0             000 | 1
 001 | 00              001 | 0             001 | 0
 010 | 11              010 | 1             010 | 1
 011 | 01     --->     011 | 0      +      011 | 1
 100 | 10              100 | 1             100 | 0
 101 | 10              101 | 1             101 | 0
 110 | 11              110 | 1             110 | 1
 111 | 00              111 | 0             111 | 0

The truth table for fL(a,b,c) is (0, 0, 1, 0, 1, 1, 1, 0) or 2Eh. We can also call this the LUT-mask in the context of an FPGA. For each output bit of our S-box we need an 8-to-1 multiplexer (three select bits), and that in turn can be represented by 2-to-1 multiplexers.

Multiplexers in Software

Let’s take the mux() function from above and make it constant-time. As stated earlier, bitslicing is competitive only through parallelization, so, for demonstration, we’ll use uint8_t arguments to later compute eight S-box lookups in parallel.

uint8_t mux(uint8_t a, uint8_t b, uint8_t s) {
  return (a & ~s) | (b & s);
}

If the n-th bit of s is zero, it selects the n-th bit of a; if not, it forwards the n-th bit of b. The wider the target architecture's registers, the bigger the theoretical throughput – but only if the workload can take advantage of that level of parallelization.
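
One thing the code below takes for granted is how eight independent 3-bit inputs end up spread across the variables a, b and c in the first place. Here is a minimal packing sketch; the function name pack3 and the input layout are assumptions for illustration, not part of the original code:

#include <stdint.h>

/* Transpose eight 3-bit S-box inputs into bitsliced form: bit i of each
   output variable belongs to input in[i]. Variable a collects every
   input's most significant bit, b the middle bit, c the least significant. */
void pack3(const uint8_t in[8], uint8_t* a, uint8_t* b, uint8_t* c) {
  *a = *b = *c = 0;
  for (int i = 0; i < 8; i++) {
    *a |= (uint8_t)(((in[i] >> 2) & 1) << i);
    *b |= (uint8_t)(((in[i] >> 1) & 1) << i);
    *c |= (uint8_t)(((in[i] >> 0) & 1) << i);
  }
}

Unpacking the two output bytes back into eight 2-bit results works the same way in reverse.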

A first implementation

The two output bits will be computed separately and then assembled into the final value returned by SBOX(). Each multiplexer in the tree described above is represented by a mux() call. In each function, the first four muxes take the bits of the LUT-mask as inputs: 2Eh for SBOXL() and B2h for SBOXR().

These multiplexer trees are Boolean functions defined on single-bit parameters. We use uint8_t, so instead of 1 we need to use ~0 to get 0b11111111.

uint8_t SBOXL(uint8_t a, uint8_t b, uint8_t c) {
  uint8_t c0 = mux( 0,  0, c);
  uint8_t c1 = mux(~0,  0, c);
  uint8_t c2 = mux(~0, ~0, c);
  uint8_t c3 = mux(~0,  0, c);

  uint8_t b0 = mux(c0, c1, b);
  uint8_t b1 = mux(c2, c3, b);

  return mux(b0, b1, a);
}
uint8_t SBOXR(uint8_t a, uint8_t b, uint8_t c) {
  uint8_t c0 = mux(~0,  0, c);
  uint8_t c1 = mux(~0, ~0, c);
  uint8_t c2 = mux( 0,  0, c);
  uint8_t c3 = mux(~0,  0, c);

  uint8_t b0 = mux(c0, c1, b);
  uint8_t b1 = mux(c2, c3, b);

  return mux(b0, b1, a);
}
void SBOX(uint8_t a, uint8_t b, uint8_t c, uint8_t* l, uint8_t* r) {
  *l = SBOXL(a, b, c);
  *r = SBOXR(a, b, c);
}

That wasn’t too hard. SBOX() is constant-time and immune to cache timing attacks. Not counting the negation of constants (~0) we have 42 gates in total and perform eight lookups in parallel.

Assuming, for simplicity, that a table lookup is just one operation, the bitsliced version is about five times as slow. If we had a workload that allowed for 64 parallel S-box lookups we could achieve eight times the current throughput by using uint64_t variables.
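
The change is mechanical, since the circuit itself stays the same and only the register type widens. A hypothetical sketch of the wider multiplexer, assuming such a 64-lookup workload:

#include <stdint.h>

/* Same selector logic, 64 lanes wide: one call now serves 64 parallel
   S-box lookups instead of 8, with an unchanged gate count. */
uint64_t mux64(uint64_t a, uint64_t b, uint64_t s) {
  return (a & ~s) | (b & s);
}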

A better mux() function

mux() currently needs three operations. Here’s another variant using XOR:

uint8_t mux(uint8_t a, uint8_t b, uint8_t s) {
  uint8_t c = a ^ b;
  return (c & s) ^ a;
}

There are still three gates, but the new version often lends itself to easier optimization, as we might be able to precompute a ^ b and reuse the result.
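
For instance, if the same (a, b) pair is selected by two different selector bits, the XOR can be computed once and shared. This is an illustrative sketch, not code from the post:

#include <stdint.h>

/* mux(a, b, s) == ((a ^ b) & s) ^ a, so two selections of the same
   input pair can share the a ^ b term. */
void mux_pair(uint8_t a, uint8_t b, uint8_t s0, uint8_t s1,
              uint8_t* r0, uint8_t* r1) {
  uint8_t d = a ^ b;   /* computed once */
  *r0 = (d & s0) ^ a;  /* mux(a, b, s0) */
  *r1 = (d & s1) ^ a;  /* mux(a, b, s1) */
}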

Simplifying the circuit

Let’s optimize our circuit manually by following these simple rules:

  • mux(a, a, s) reduces to a.
  • Any X AND ~0 will always be X.
  • Anything AND 0 will always be 0.
  • mux() with constant inputs can be reduced.

With the new mux() variant there are a few XOR rules to follow as well:

  • Any X XOR X reduces to 0.
  • Any X XOR 0 reduces to X.
  • Any X XOR ~0 reduces to ~X.

Inline the remaining mux() calls, eliminate common subexpressions, repeat.

void SBOX(uint8_t a, uint8_t b, uint8_t c, uint8_t* l, uint8_t* r) {
  uint8_t na = ~a;
  uint8_t nb = ~b;
  uint8_t nc = ~c;

  uint8_t t0 = nb & a;
  uint8_t t1 = nc & b;
  uint8_t t2 = b | nc;
  uint8_t t3 = na & t2;

  *l = t0 | t1;
  *r = t1 | t3;
}

Using the laws of Boolean algebra and the rules formulated above I’ve reduced the circuit to nine gates (down from 42!). We actually couldn’t simplify it any further.

Next: Circuit Minimization

Finding the minimal form of a Boolean function is an NP-complete problem. Manual optimization is tedious but doable for a tiny S-box such as the example used in this post. It will not be as easy for multiple 6-to-4-bit S-boxes (DES) or an 8-to-8-bit one (AES).

There are simpler and faster ways to build those circuits, and deterministic algorithms to check whether we reached the minimal form. One of those is covered in my next post Bitslicing with Karnaugh maps.

Nick CameronRustfmt 1.0 release candidate

The current version of Rustfmt, 0.99.2, is the first 1.0 release candidate. It is available on nightly and beta (technically 0.99.1 there) channels, and from the 13th September will be available with stable Rust.

1.0 will be a huge milestone for Rustfmt. As part of its stability guarantees, its formatting will be frozen (at least until 2.0). That means any sub-optimal formatting still around will be with us for a while. So please help test Rustfmt and report any bugs or sub-optimal formatting.

Rustfmt's formatting is specified in RFC 2436. Rustfmt does not reformat comments, string literals, or many macros/macro uses.

To install Rustfmt: rustup component add rustfmt-preview. To run it, use rustfmt main.rs (replacing main.rs with the file, and its submodules, that you want to format) or cargo fmt. For more information see the README.

The Mozilla BlogWelcome Amy Keating, our incoming General Counsel

I’m excited to announce that Amy Keating will be joining us in September as Mozilla’s new General Counsel.

Amy will work closely with me to help scale and reinforce our legal capabilities. She will be responsible for all aspects of Mozilla’s legal work including product counseling, commercial contracts, licensing, privacy issues and legal support to the Mozilla Foundation.

“Mozilla’s commitment to innovation and an internet that is open and accessible to all speaks to me at a personal level, and I’ve been drawn to serving this kind of mission throughout my career,” said Amy Keating, Mozilla incoming General Counsel. “I’m grateful for the opportunity to learn from Mozilla’s incredible employees and community and to help promote the principles that make Mozilla a trusted and unique voice in the world.”

Amy joins Mozilla from Twitter, Inc. where she has been Vice President, Legal and Deputy General Counsel. When she joined Twitter in 2012, she was the first lawyer focused on litigation, building out the functions and supporting the company as both the platform and the employee base grew in the U.S. and internationally. Her role expanded over time to include oversight of Twitter’s product counseling, regulatory, privacy, employment legal, global litigation, and law enforcement legal response functions. Prior to Twitter, Amy was part of Google, Inc.’s legal team and began her legal career as an associate at Bingham McCutchen LLP.

From her time at Twitter and prior, Amy brings a wealth of experience and a deep understanding of the product, litigation, regulatory, international, intellectual property and employment legal areas.

Join me in welcoming Amy to Mozilla!

Denelle

The post Welcome Amy Keating, our incoming General Counsel appeared first on The Mozilla Blog.

Mozilla Addons BlogBuilding Extension APIs with Friend of Add-ons Oriol Brufau

Please meet Oriol Brufau, our newest Friend of Add-ons! Oriol is one of 23 volunteer community members who have landed code for the WebExtensions API in Firefox since the technology was first introduced in 2015. You may be familiar with his numerous contributions  if you have set a specific badge text color for your browserAction, highlighted multiple tabs with the tabs.query API, or have seen your extension’s icon display correctly in about:addons.

While our small engineering team doesn’t always have the resources to implement every approved request for new or enhanced WebExtensions APIs, the involvement of community members like Oriol adds considerable depth and breadth to technology that affects millions of users. However, the Firefox code base is large, complex, and full of dependencies. Contributing code to the browser can be difficult even for experienced developers.

As part of celebrating Oriol’s achievements, we asked him to share his experience contributing to the WebExtensions API with the hope that it will be helpful for other developers interested in landing more APIs in Firefox.

When did you first start contributing code to Firefox? When did you start contributing code to WebExtensions APIs?

I had been using Firefox Nightly, reporting bugs and messing with code for some time, but my first code contribution wasn’t until February 2016. This was maybe not the best choice for my first bug. I managed to fix it, though I didn’t have much idea about what the code was doing, and my patch needed some modifications by Jonathan Kew.

For people who want to start contributing, it’s probably a better idea to search Bugzilla for a bug with the ‘good-first-bug’ keyword. (Editor’s note: you can find mentored good-first-bugs for WebExtensions APIs here.)

I started contributing to the WebExtensions API in November 2017, when I learned that legacy extensions would stop working even if I had set the preference to enable legacy extensions in Nightly. Due to the absence of good compatible alternatives to some of my legacy add-ons, I tried to write them myself, but I couldn’t really do what I wanted because some APIs were buggy or lacked various features. Therefore, I started making proposals for new or enhanced APIs, implementing them, and fixing bugs.

What were some of the challenges to building and landing code for the WebExtensions API?

I wasn’t very familiar with WebExtensions APIs, so understanding their implementation was a bit difficult at first. Also, debugging the code can be tricky. Some code runs in the parent process and some in the content one, and the debugger can make Firefox crash.

Initially, I used to forget about testing for Android. Sometimes I had a patch that seemed to work perfectly on Linux, but it couldn't land because it broke some Android tests. In fact, not being able to run Android tests locally on my PC is a big annoyance.

What resources did you use to overcome those challenges?

I use https://searchfox.org, a source code indexing tool for Firefox, which makes it easy to find the code that I want to modify, and I received some help from mentors in Bugzilla.

Reading the documentation helps but it’s not very detailed. I usually need to look at the Firefox or Chromium code in order to answer my questions.

Did any of your past experiences contributing code to Firefox help you create and land the WebExtensions APIs?

Yes. Despite being unfamiliar with WebExtensions APIs at first, I had considerable experience with searching code using Searchfox, using './mach build fast' to recompile only the frontend, running tests, managing my patches with Mercurial, and getting them reviewed and landed.

Also, I already had commit access level 1, which allows me to run tests in the try servers. That’s helpful for ensuring everything works on Android.

What advice would you give people who want to build and land WebExtensions APIs in Firefox?

1. I didn’t find explanations for how the code is organized, so I would first summarize it.

The code is mainly distributed into three different folders:

  • /browser/components/extensions/
  • /mobile/android/components/extensions/
  • /toolkit/components/extensions/

The 'browser' folder contains the code specific to Firefox desktop, the 'mobile/android' folder is specific to Firefox for Android, and 'toolkit' contains code shared by both.

Some APIs are defined directly in 'toolkit', and some are defined differently in 'browser' and 'android', but they can still share some code from 'toolkit'.

2. APIs are defined using JSON schemas. They are located in the ‘schemas’ subdirectory of the folders above, and describe the API properties and methods, determine which kind of parameters are accepted, etc.

3. The actual logic of the APIs is written in JavaScript files in the ‘parent’ and ‘child’ subdirectories, mostly the former.

Is there anything else you would like to add?

The existing WebExtensions APIs are now more mature, useful and reliable than they were when support for legacy extensions was dropped. It's great that promising new APIs are on the way!

Thanks, Oriol! It is a pleasure to have you in the community and we wish you all the best in your future endeavors.

If you are interested in contributing to the WebExtensions API and are new to Firefox’s infrastructure, we recommend that you onboard to the Firefox codebase and then land a patch for a good-first-bug. If you are more familiar with Firefox infrastructure, you may want to implement one of the approved WebExtensions API requests.

For more opportunities to contribute to the add-ons ecosystem, please visit our Contribution wiki.

The post Building Extension APIs with Friend of Add-ons Oriol Brufau appeared first on Mozilla Add-ons Blog.

Mozilla Localization (L10N)L10N Report: August Edition

Please note some of the information provided in this report may be subject to change as we are sometimes sharing information about projects that are still in early stages and are not final yet.

Welcome!

After a quick pause in July, your primary source of localization information at Mozilla is back!

New content and projects

What’s new or coming up in Firefox desktop

As localization drivers, we’re currently working on rethinking and improving the experience of multilingual users in Firefox. While this is a project that will span through several releases of Firefox, the first part of this work already landed in Nightly (Firefox 63): it’s a language switcher in Preferences, hidden behind the intl.multilingual.enabled preference, that currently allows to switch to another language already installed on the system (via language packs).

The next step will be to allow installing a language pack directly from Preferences (for the release version), and to install dictionaries when the user chooses to add a new language. For that reason, we're creating a list of dictionaries for each locale. For more details, and to discover how you can help, read this thread on dev-l10n.

We’re also working on building a list of native language names to use in the language switcher; once again, check dev-l10n for more info.

Quite a few strings landed in the past weeks for Nightly:

  • Pages for certificate errors have a new look. To test them, you currently need to change the setting browser.security.newcerterrorpage.enabled to true in about:config. The testing instructions available in our documentation remain valid.
  • There’s a whole new section dedicated to Content blocking in preferences, enabled by default in Nightly.

https://screenshotscdn.firefoxusercontent.com/images/765711cf-32e5-4be6-8239-e8bb81d2f8a6.png

What’s new or coming up in mobile

It’s summer time in the western hemisphere, which means many projects (and people!) are taking a break – which also means not many strings are expected to land in mobile land during this period.

One notable thing is that Firefox iOS v13 was just released, and Marathi is a new locale this time around. Congratulations to the team.

On the Firefox for Android front, Bosnian (bs), Occitan (oc) and Triqui (trs) are new locales that shipped in the current release version, v61. And we just added English from Canada (en-CA) and Ligurian (lij) to our Nightly v63 multi-locale build, which is available through the Google Play Store. Congratulations to everyone!

Other than that, most mobile projects are on a bit of a hiatus for the rest of the month. However, do expect some new and exciting projects to come in the pipeline over the course of the next few weeks. Stay tuned for more information!

What’s new or coming up in web projects

AMO

About two weeks ago, over 160 sets of curated add-on titles and descriptions were landed in Pontoon. Once localized, they will be included in a Shield Study to be launched on August 13. The study will run for about 2 months. This is probably the largest and longest study the AMO team has conducted.

The current Disco Pane (about:addons) lists curated extensions and themes which are manually programmed. TAAR (Telemetry Aware Add-on Recommender) is a new machine-learning extension discovery system that makes personalized recommendations based on information available in Firefox standard Telemetry. Based on TAAR’s potential to enhance content discovery by surfacing more diversified and personalized recommendations, the team wants to integrate TAAR as a product feature of Disco Pane.  It’s called “Disco-TAAR”.

The localized titles and descriptions will increase the likelihood that users install a recommended add-on, and install more than one. To be part of this study, you need to make sure your locale has completed at least 80% of the AMO strings by August 12.

Common Voice

Like many of you, the team is taking a summer break. However, when they come back, they promise to introduce a new home page at the beginning of next month. There should be no localization impact.

There are three ways of contributing to this project:

  1. Web part (through Pontoon)
  2. Sentence collection
  3. Recording

We now have 70 locales showing interest in the project. Many have reached 100% completion or are close to it. Congratulations on reaching these first milestones. However, your work shouldn't stop here. The sentence collection is a major challenge that all the teams face before the fun recording part can begin. Ruben Martin from the Open Innovation team addresses the challenges in this blog post. If you want to learn more about the Common Voice project, sign up to Discourse where lots of discussions take place.

What’s new or coming up in Foundation projects

August fundraising emails will be sent to the English-speaking audience only; the team realizes a lot of people, especially Europeans, are away on holiday and won't be reading emails. A lot of localizers should be away as well, so they decided it was best to skip this email and focus on the September fundraiser.

The Internet Health Report team has started working on next year’s report and is planning to send a localized email in French, German and Spanish to collect examples of projects that improve the health of the internet, links to great new research studies or ideas for topics they should include in the upcoming report.

As for the localized campaign work, it is slowing down in August for several reasons, one of them being an ongoing process to hire two new team members to expand the advocacy campaigns in Europe: a campaign manager and a partnership organizer. If you know potential candidates, it would be great if you could forward these offers to them!

That being said, you can expect some movement on the Copyright campaign towards the end of the month, as the next vote is currently scheduled for September 12th.

What’s new or coming up in Pontoon

File priorities and deadlines

The most critical piece of information coming from the tags feature is file priority. It used to be pretty hard to discover, because it was only available in the rarely used Tags tab. We fixed that by exposing file priority in the Resources tab of the Localization dashboard. Similarly, we also added a deadline column to the localization dashboard.

Read-only locales

Pontoon introduced the ability to enable locales for projects in read-only mode. They act the same as regular (read-write) locales, except that users cannot make any edits through Pontoon (by submitting or reviewing translations or suggestions, uploading files or performing batch actions).

That allows us to access translations from locales that do not use Pontoon for localization of some of their projects, which brings a handful of benefits. For example, dashboards and the API will now present full project status across all locales, all Mozilla translations will be accessible in the Locales tab (which is not the case for localizations like the Italian Firefox at the moment) and the Translation Memory of locales currently not available in Pontoon will improve.

Expect more details about this feature as soon as we start enabling read-only locales.

Translation display improvements

Thanks to a patch by Vishal Sharma, translation status in the History tab is now also shown to unauthenticated users and non-Translators. We've also removed rejected suggestions from the string list and the editor.

Events

  • The Jakarta l10n event took place last weekend with the local community. Some of the things we worked on include: localizing the Rocket browser in Javanese and Sundanese, localizing corresponding SUMO articles, refining style guides… and much more!
  • Want to showcase an event coming up that your community is participating in? Reach out to any l10n-driver and we’ll include that (see links to emails at the bottom of this report)

Accomplishments

Common Voice

Many communities have made significant progress in the third step: donating their voices. In July, recordings were made in the following languages, among others.

  • 33 hours English
  • 60 hours Catalan
  • 30 hours Mandarin
  • 24 hours Kabyle
  • 18 hours French
  • 14 hours German
  • 9 other languages at < 10 hours

For the complete list of all the languages, check the language status dashboard.

Useful Links

Questions? Want to get involved?

 

Did you enjoy reading this report? Let us know how we can improve by reaching out to any one of the l10n-drivers listed above.

This Week In RustThis Week in Rust 247

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

News & Blog Posts

Crate of the Week

This week's crate is macro_railroad, a library to create neat syntax diagrams for macro_rules! declarative macros. Thanks to kornel for the suggestion!

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

102 pull requests were merged in the last week

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

No RFCs were approved this week.

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

RFCs
Tracking Issues & PRs

New RFCs

Upcoming Events

Online
Asia
Europe
North America

If you are running a Rust event please add it to the calendar to get it mentioned here. Email the Rust Community Team for access.

Rust Jobs

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

Fearless concurrency includes fearless refactoring.

cuviper at rust-users.

Thanks to Jules Kerssemakers for the suggestion!

Please submit your quotes for next week!

This Week in Rust is edited by: nasa42, llogiq, and Flavsditz.

Robert O'CallahanDiagnosing A Weak Memory Ordering Bug

For the first time in my life I tracked a real bug's root cause to incorrect usage of weak memory orderings. Until now weak memory bugs were something I knew about but had subconsciously felt were only relevant to wizards coding on big iron, partly because until recently I've spent most of my career using desktop x86 machines.

Under heavy load a Pernosco service would assert in Rust's std::thread::Thread::unpark() with the error "inconsistent state in unpark". Inspecting the code led to the disturbing conclusion that the only way to trigger this assertion was memory corruption; the value of self.inner.state should always be between 0 and 2 inclusive, and if so then we shouldn't be able to reach the panic. The problem was nondeterministic but I was able to extract a test workload that reproduced the bug every few minutes. I tried recording it in rr chaos mode but was unable to reproduce it there (which is not surprising in hindsight since rr imposes sequential consistency).

With a custom panic handler I was able to suspend the process in the panic handler and attach gdb to inspect the state. Everything looked fine; in particular the value of self.inner.state was PARKED so we should not have reached the panic. I disassembled unpark() and decided I'd like to see the values of registers in unpark() to try to determine why we took the panic path, in particular the value of self.inner (a pointer) loaded into RCX and the value of self.inner.state loaded into RAX. Calling into the panic handler wiped those registers, so I manually edited the binary to replace the first instruction of the panic handler with UD2 to trigger an immediate core-dump before registers were modified.

The core-dump showed that RCX pointed to some random memory and was not equal to self.inner, even though we had clearly just loaded it from there! The value of state in RAX was loaded correctly via RCX, but was garbage because we were loading from the wrong address. At this point I formed the theory the issue was a low-level data race, possibly involving relaxed memory orderings — particularly because the call to unpark() came from the Crossbeam implementation of Michael-Scott lock-free queues. I inspected the code and didn't see an obvious memory ordering bug, but I also looked at the commit log for Crossbeam and found that a couple of memory ordering bugs had been fixed a long time ago; we were stuck on version 0.2 while the released version is 0.4. Upgrading Crossbeam indeed fixed our bug.

Observation #1: stick to sequential consistency unless you really need the performance edge of weaker orderings.

Observation #2: stick to sequential consistency unless you are really, really smart and have really really smart people checking your work.

Observation #3: it would be really great to have user-friendly tools to verify the correctness of unsafe, weak-memory-dependent code like Crossbeam's.

Observation #4: we need a better way of detecting when dependent crates have known subtle correctness bugs like this (security bugs too). It would be cool if the crates.io registry knew about deprecated crate versions and cargo build warned about them.

Firefox NightlySymantec Distrust in Firefox Nightly 63

As of today, TLS certificates issued by Symantec are distrusted in Firefox Nightly.

You can learn more about what this change means for websites and our release schedule for that change in our Update on the Distrust of Symantec TLS Certificates post published last July by the Mozilla security team.

The Symantec distrust is already effective in Chrome Canary, which means that visitors to a web site whose Symantec certificate has not been replaced now get a warning page:

(left is Chrome Canary, right is Firefox Nightly)

We strongly encourage website operators to replace their distrusted Symantec certificate as soon as possible before this change hits the Firefox 63 release planned for October 23.

If you are a Firefox Nightly user, you can also get involved and help this transition by contacting the support channels of these websites to warn them about this change!

Mike HoyeLicensing Edgecases

While I’m not a lawyer – and I’m definitely not your lawyer – licensing questions are on my plate these days. As I’ve been digging into one, I’ve come across what looks like a strange edge case in GPL licensing compliance that I’ve been trying to understand. Unfortunately it looks like it’s one of those Affero-style, unforeseen edge cases that (as far as I can find…) nobody’s tested legally yet.

I spent some time trying to understand how the definition of “linking” applies in projects where, say, different parts of the codebase use disparate, potentially conflicting open source licenses, but all the code is interpreted. I’m relatively new to this area, but generally speaking outside of copying and pasting, “linking” appears to be the critical threshold for whether or not the obligations imposed by the GPL kick in and I don’t understand what that means for, say, Javascript or Python.

I suppose I shouldn’t be surprised by this, but it’s strange to me how completely the GPL seems to be anchored in early Unix architectural conventions. Per the GPL FAQ, unless we’re talking about libraries “designed for the interpreter”, interpreted code is basically data. Using libraries counts as linking, but in the eyes of the GPL any amount of interpreted code is just a big, complicated config file that tells the interpreter how to run.

At a glance this seems reasonable but it seems like a pretty strange position for the FSF to take, particularly given how much code in the world is interpreted, at some level, by something. And honestly: what’s an interpreter?

The text of the license and the interpretation proposed in the FAQ both suggest that as long as all the information that a program relies on to run is contained in the input stream of an interpreter, the GPL – and if their argument sticks, other open source licenses – simply… doesn’t apply. And I can’t find any other major free or open-source licenses that address this question at all.

It just seems like such a weird place for an oversight. And given the often-adversarial nature of these discussions, given the stakes, there’s no way I’m the only person who’s ever noticed this. You have to suspect that somewhere in the world some jackass with a very expensive briefcase has an untested legal brief warmed up and ready to go arguing that a CPU’s microcode is an “interpreter” and therefore the GPL is functionally meaningless.

Whatever your preferred license of choice, that really doesn’t seem like a place we want to end up; while this interpretation may be technically correct it’s also very-obviously a bad-faith interpretation of both the intent of the GPL and that of the authors in choosing it.

The position I’ve taken at work is that “are we technically allowed to do this” is a much, much less important question than “are we acting, and seen to be acting, as good citizens of the larger Open Source community”. So while the strict legalities might be blurry, seeing the right thing to do is simple: we treat the integration of interpreted code and codebases the same way we’d treat C/C++ linking, respecting the author’s intent and the spirit of the license.

Still, it seems like something the next generation of free and open-source software licenses should explicitly address.

Shing LyuChatting with your website visitors through Chatra

When I started the blog, I didn't add a message board below each article because I don't have the time to deal with spam. Per the broken windows theory, if I leave the spam unattended my blog will soon become a landfill for spammers. But nowadays many e-commerce and brand sites have a live chat box, which solves my problem: I can simply ignore spam, while interested readers can ask questions and provide feedback easily. That's why when my sponsor, Chatra.io, approached me with their great tool, I fell in love with it right away and had to share it with everyone.

How it works

First, sign up for a free account here, and you'll be logged into a clean and modern chat interface.

first login page

You'll get a JavaScript widget snippet (in the "Set up & customize" page or email), which you can easily place onto your site (even if you don't have a backend, like this site). A chat button will immediately appear on your site. Your visitors can now send you messages, and you can choose to reply to them right away or follow up later using the web dashboard, desktop or mobile app.

chat box

What I love about Chatra

Easy setup and clean UI

As you can see, the setup is simply pasting a block of code into your blog template (or using their app or plugin for your platform), and it works right away. The chat interface is modern and clean; you can "get it" in no time if you've ever used any chat app.

Considerations for bloggers who can’t be online all day

You might wonder, "I don't have an army of customer service agents, how can I keep everyone happy with only myself replying to messages?" But Chatra has already considered that for you with messenger mode, which can receive messages 24/7 even if you are offline. A bot will automatically reply to your visitor and ask for their contact details, so you can follow up later with an email. Every live or missed message can be configured to be sent to your email, so you can check them in batches after a while. Messaging history is also preserved even if a visitor leaves and comes back later, so you keep the context of what they were saying. It's also important to set expectations for your visitors, to let them know you are working alone and can't reply super fast. That brings us to the next point: customizable welcome messages and prompts.

Customizable and programmable

Almost everything in Chatra is customizable, from the welcome message and chat button text to the automatic reply content. So instead of saying "We are a big team and we'll reply in 10 mins, guaranteed!", you can say something along the lines of "Hi, I'm running this site alone and I'd love to hear from you. I'll get back to you within days". Besides customizing the look, feel and tone of speech, you can also set up triggers that automatically initiate a chat when criteria are met. For example, we can send an automated message when a visitor has been reading the article for more than 1 minute. Of course you can further customize the experience using the developer API.

trigger setup page

Out-of-the-box Google Analytics integration

One thing I really care about is understanding how my visitors interact with the site, and how I can optimize the content and UX to further engage them. I do that through Google Analytics. Much to my amazement, Chatra detected my Google Analytics configuration and automatically sends relevant events to my Google Analytics tracking, without me setting up anything. I can directly create goals based on the events and track the conversion funnel leading to a chat.

Pricing and features

Chatra has a free and a paid plan, and also a 2-week trial period that allows you to test everything before paying. The number of registered agents, connected websites and concurrent chats is unlimited in all plans. The free plan has basic features, including mobile apps, Google Analytics integration, some API options, etc., and allows 1 agent to be online at a time, which is sufficient for a one-person website like mine. But you can have several agents taking turns chatting: when one goes offline, another connects. And even if the online spot is already taken, other agents can still access the dashboard and read chats.

The paid plan starts at $15 per month and gives you access to all features, including automatic triggers and visitors online list, saved replies, typing insights, visitor information, integration with services like Zapier, Slack, Help Scout and more, and allows as many agents online as paid for. Agents on the paid plan can also take turns chatting, so there’s no need to pay for all of them.

Conclusion

All in all, Chatra is a nice tool to further engage your visitors. The free plan is generous enough for most small-scale websites. In case you scale up in the future, their paid plan is affordable and pays for itself after a few successful sales. So if you want an easy and convenient way to chat with your visitors, gain feedback and have more insights into your users, you should give Chatra a try with this link now.

Mozilla Security BlogTLS 1.3 Published: in Firefox Today

On Friday the IETF published TLS 1.3 as RFC 8446. It's already shipping in Firefox and you can use it today. This version of TLS incorporates significant improvements in both security and speed.

Transport Layer Security (TLS) is the protocol that powers every secure transaction on the Web. The version of TLS in widest use, TLS 1.2, is ten years old this month and hasn't really changed that much from its roots in the Secure Sockets Layer (SSL) protocol, designed back in the mid-1990s. Despite the minor version number bump, this isn't the minor revision it appears to be. TLS 1.3 is a major revision that represents more than 20 years of experience with communication security protocols, and four years of careful work from the standards, security, implementation, and research communities (see Nick Sullivan's great post for the cool details).

Security

TLS 1.3 incorporates a number of important security improvements.

First, it improves user privacy. In previous versions of TLS, the entire handshake was in the clear which leaked a lot of information, including both the client and server’s identities. In addition, many network middleboxes used this information to enforce network policies and failed if the information wasn’t where they expected it.  This can lead to breakage when new protocol features are introduced. TLS 1.3 encrypts most of the handshake, which provides better privacy and also gives us more freedom to evolve the protocol in the future.

Second, TLS 1.3 removes a lot of outdated cryptography. TLS 1.2 included a pretty wide variety of cryptographic algorithms (RSA key exchange, 3DES, static Diffie-Hellman) and this was the cause of real attacks such as FREAK, Logjam, and Sweet32. TLS 1.3 instead focuses on a small number of well understood primitives (Elliptic Curve Diffie-Hellman key establishment, AEAD ciphers, HKDF).

Finally, TLS 1.3 is designed in cooperation with the academic security community and has benefitted from an extraordinary level of review and analysis.  This included formal verification of the security properties by multiple independent groups; the TLS 1.3 RFC cites 14 separate papers analyzing the security of various aspects of the protocol.

Speed

While computers have gotten much faster, the time data takes to get between two network endpoints is limited by the speed of light and so round-trip time is a limiting factor on protocol performance. TLS 1.3’s basic handshake takes one round-trip (down from two in TLS 1.2) and TLS 1.3 incorporates a “zero round-trip” mode in which the client can send data to the server in its first set of network packets. Put together, this means faster web page loading.

What Now?

TLS 1.3 is already widely deployed: both Firefox and Chrome have fielded “draft” versions. Firefox 61 is already shipping draft-28, which is essentially the same as the final published version (just with a different version number). We expect to ship the final version in Firefox 63, scheduled for October 2018. Cloudflare, Google, and Facebook are running it on their servers today. Our telemetry shows that around 5% of Firefox connections are TLS 1.3. Cloudflare reports similar numbers, and Facebook reports that an astounding 50+% of their traffic is already TLS 1.3!

TLS 1.3 was a big effort with a huge number of contributors, and it's great to see it finalized. With the publication of the TLS 1.3 RFC we expect to see further deployments from other browsers, servers and toolkits, all of which makes the Internet more secure for everyone.

 

The post TLS 1.3 Published: in Firefox Today appeared first on Mozilla Security Blog.

Firefox Test PilotSend: Going Bigger

Send encrypts your files in the browser. This is good for your privacy because it means only you and the people you share the key with can decrypt it. For me, as a software engineer, the challenge with doing it this way is the limited API set available in the browser to "go full circle". There are a few things that make it a difficult problem.

The biggest limitation on Send today is the size of the file. This is because we load the entire thing into memory and encrypt it all at once. It’s a simple and effective way to handle small files but it makes large files prone to failure from running out of memory. What size of file is too big also varies by device. We’d like everyone to be able to send large files securely regardless of what device they use. So how can we do it?

The first challenge is to not load and encrypt the file all at once. RFC 8188 specifies a standard for an encrypted content encoding over HTTP that is designed for streaming. This ensures we won't run out of memory during encryption and decryption by breaking the file into smaller chunks. Implementing the RFC as a Stream gives us a nice way to represent our encrypted content.

With a stream instead of a Blob we run into another challenge when it’s time to upload. Streams are not fully supported by the fetch API in all the browsers we want to support yet, including Firefox. We can work around this though, with WebSockets.

Now we're able to encrypt, upload, download, and decrypt large files without using too much memory. Unfortunately, there's one more problem to face before we're able to save a file. There's no easy way to download a stream from JavaScript to the local filesystem. The standard way to download data from memory as a file is with createObjectURL, which needs a blob. To stream the decrypted data as a download requires a trip through a ServiceWorker. StreamSaver.js is a nice implementation of the technique. Here again we run into browser support as a limiting factor. There isn't a workaround that doesn't require having the whole file in memory, which is another case of our original problem. But streams will be stable in Firefox soon, so we'll be able to support large files as soon as they're available.

In the end, it’s quite complicated to do end-to-end encryption of large files in the browser compared to small ones, but it is possible. It’s one of many improvements we’re working on for Send this summer that we’re excited about.

As always, you’re welcome to join us on GitHub, and thank you to everyone who’s contributed so far.


Send: Going Bigger was originally published in Firefox Test Pilot on Medium, where people are continuing the conversation by highlighting and responding to this story.

Princi VershwalVector Tile Support for OpenStreetMap’s iD Editor

Protocolbuffer Binary Format (.pbf) and Mapbox Vector Tiles (.mvt) are two popular formats for sharing map data. Prior to this GSoC project, the iD editor in OSM supported GPX data. GPX is an XML schema designed as a common GPS data format for software applications. It can be used to describe waypoints, tracks, and routes.

The main objective of the project was to add support for vector tile data to iD. MVT and PBF files contain the data of a particular tile, encoded in Protocolbuffer binary format, and can hold various sets of data such as names of cities or train stations. This data can be in the form of points, lines or polygons. A vector tile looks something like this:

The goal is to draw the data of these tiles in iD, so that it shows up on the screen like this:

For implementing the feature the following steps were followed:

  1. Creating a new layer : A new mvt layer is created that accepts a pbf/mvt file. The d3_request library is used to read the data in arraybuffer format.
  2. Converting data to GeoJSON : The arraybuffer data is converted to GeoJSON format before passing to the drawMvt function.
    For converting vector tile data to GeoJSON data, Mapbox provides with two libraries:
    1. vt2geojson
    2. vector-tile-js
    vt2geojson is great for changing vector tiles to GeoJSON from remote URLs or local system files but it works with Node.js only.
    For iD we have used Mapbox's vector-tile-js; it reads Mapbox Vector Tiles and allows access to the layers and features, which can then be converted to GeoJSON.
  3. MVT drawing : This GeoJSON data is passed directly to the D3 draw functions, which render the data. (iD already uses D3 for all of its drawing.)

All the work related to the above steps is here.

4. The next step was writing tests for the above code. The tests are here.

Performance Testing

  1. Choosing data : The data which was used to create the vector tiles for testing is this : https://data.cityofnewyork.us/Environment/2015-Street-Tree-Census-Tree-Data/pi5s-9p35/data
    It is a dense dataset consisting only of points.
  2. Creating MVTs : Vector tiles were created using the above data using a tool called tippecanoe. Mapbox’s tippecanoe is used to build vector tilesets from large (or small) collections of GeoJSON, Geobuf, or CSV features.
  3. Tippecanoe converts GeoJSON data to the mbtiles format; these files contain data for more than one tile. Mapbox/mbview was used to view these tiles on localhost and extract individual tiles from the network tab.

4. This URL, when passed to iD, draws the vector tile like this:

URL used : http://preview.ideditor.com/master/#background=Bing&disable_features=boundaries&map=9.00/39.7225/-74.0153&mvt=https://a.tiles.mapbox.com/v4/mapbox.mapbox-terrain-v2,mapbox.mapbox-streets-v7/12/1207/1541.vector.pbf?access_token= ‘pk.0000.1111’
# replace value with your mapbox public access token

Some More Interesting Stuff

There is much more that can be done with vector tiles. One thing is better styling of the drawings. An immediate next step is to provide different colors for different layers of the tile data.

For more discussion, you can follow here.
My earlier blogs can be found here.

Niko MatsakisNever patterns, exhaustive matching, and uninhabited types (oh my!)

One of the long-standing issues that we’ve been wrestling with in Rust is how to integrate the concept of an “uninhabited type” – that is, a type which has no values at all. Uninhabited types are useful to represent the “result” of some computation you know will never execute – for example, if you have to define an error type for some computation, but this particular computation can never fail, you might use an uninhabited type.

RFC 1216 introduced ! as the sort of "canonical" uninhabited type in Rust, but actually you can readily make an uninhabited type of your very own just by declaring an enum with no variants (e.g., enum Void { }). Since such an enum can never be instantiated, the type cannot have any values. Done.

However, ever since the introduction of !, we’ve wrestled with some of its implications, particularly around exhaustiveness checking – that is, the checks the compiler does to ensure that when you write a match, you have covered every possibility. As we’ll see a bit later, there are some annoying tensions – particularly between the needs of “safe” and “unsafe” code – that are tricky to resolve.

Recently, though, Ralf Jung and I were having a chat and we came up with an interesting idea I wanted to write about. This idea offers a possibility for a “third way” that lets us resolve some of these tensions, I believe.

The idea: ! patterns

Traditionally, when one has an uninhabited type, one “matches against it” by not writing any patterns at all. So, for example, consider the enum Void { } case I had talked about. Today in Rust you can match against such an enum with an empty match statement:

enum Void { }
fn foo(v: Void) {
  match v { }
}

In effect, this match serves as a kind of assertion. You are saying “because v can never be instantiated, foo could never actually be called, and therefore – when I match against it – this match must be dead code”. Since the match is dead code, you don’t need to give any match arms: there is nowhere for execution to flow.

The funny thing is that you made this assertion – that the match is dead code – by not writing anything at all. We’ll see later that this can be problematic around unsafe code. The idea that Ralf and I had was to introduce a new kind of pattern, a ! pattern (pronounced a “never” pattern). This ! pattern matches against any enum with no variants – it is an explicit way to talk about impossible cases. Note that the ! pattern can be used with the ! type, but it can also be used with other types, like Void.

Now we can consider the match v { } above as a kind of shorthand for a use of the ! pattern:

fn foo(v: Void) {
  match v {
    !
  }
}

Note that since ! explicitly represents an unreachable pattern, we don’t need to give a “body” to the match arm either.

We can use ! to cover more complex cases as well. Consider something like a Result that uses Void as the error case. If we want, we can use the ! pattern to explicitly say that the Err case is impossible:

fn foo(v: Result<String, Void>) {
  match v {
    Ok(s) => ...,
    Err(!),
  }
}

Same for matching a “reference to nothing”:

fn foo(v: &!) {
  match v {
    &!,
  }
}

Auto-never transformation

As I noted initially, the Rust compiler currently accepts “empty match” statements when dealing with uninhabited types. So clearly the use of the ! pattern cannot be mandatory – and anyway that would be unergonomic. The idea is that before we check exhaustiveness and so forth we have an “auto-never” step that automatically adds ! patterns into your match as needed.

There are two ways you can be missing cases:

  • If you are matching against an enum, you might cover some of the enum variants but not all. e.g., match foo { Ok(_) => ... } is missing the Err case.
  • If you are matching against other kinds of values, you might be missing an arm altogether. This occurs most often with an empty match like match v { }.

The idea is that – when you omit a case – the compiler will attempt to insert ! patterns to cover that case. In effect, to try and prove on your behalf that this case is impossible. If that fails, you’ll get an error.

The auto-never rules that I would initially propose are as follows. The idea is that we define the auto-never rules based on the type that is being matched:

  • When matching a tuple or struct (a "product type"), we will "auto-never" all of the fields.
    • So e.g. if matching a (!, !) tuple, we would auto-never a (!, !) pattern.
    • But if matching a (u32, !) tuple, auto-never would fail. You would have to explicit write (_, !) as a pattern – we’ll cover this case when we talk about unsafe code below.
  • When matching a reference whose referent type is uninhabited, we will generate a & pattern and auto-never the referent.
    • So e.g. if matching a &!, we would generate a &! pattern.
    • But there will be a lint for this case that fires “around unsafe code”, as we discuss below.
  • When matching an enum, then the “auto-never” would add all missing variants to that enum and then recursively auto-never those variants’ arguments.
    • e.g., if you write match x { None => ... } where x: Option<T>, then we will attempt to insert Some(P) where the pattern P is the result of "auto-nevering" the type T.

Note that these rules compose. So for example if you are matching a value of type &(&!, &&Void), we would “auto-never” a pattern like &(&!, &&!).

Implications for safe code

One of the main use cases for uninhabited types like ! is to be able to write generic code that works with Result but have that Result be optimized away when errors are impossible. So the generic code might have a Result<String, E>, but when E happens to be !, that is represented in memory the same as String, and the compiler can see that anything working with Err variants must be dead code.

Similarly, when you get a result from such a generic function and you know that E is !, you should be able to painlessly ‘unwrap’ the result. So if I have a value result of type Result<String, !>, I would like to be able to use a let to extract the String:

let result: Result<String, !> = ...;
let Ok(value) = result;

and extract the Ok value. Similarly, I might like to extract a reference to the inner value as well, doing something like this:

let result: Result<String, !> = ...;
let Ok(value) = &result;
// Here, `value: &String`.

or – equivalently – by using the as_ref method

let result: Result<String, !> = ...;
let Ok(value) = result.as_ref();
// Here, `value: &String`.

All of these cases should work out just fine under this proposal. The auto-never transformation would effectively add Err(!) or Err(&!) patterns – so the final example would be equivalent to:

let value = match result.as_ref() {
  Ok(v) => v,
  Err(&!),
};

Unsafe code and access-based models

Around safe code, the idea of ! patterns and auto-never don’t seem that useful: it’s maybe just an interesting way to make it a bit more explicit what is happening. Where they really start to shine, however, is when you start thinking carefully about unsafe code – and in particular when we think about how matches interact with access-based models of undefined behavior.

What data does a match “access”?

While the details of our model around unsafe code are still being worked out (in part by this post!), there is a general consensus that we want an “access-based” model. For more background on this, see Ralf’s lovely recent blog post on Stacked Borrows, and in particular the first section of it. In general, in an access-based model, the user asserts that data is valid by accessing it – and in particular, they need not access all of it.

So how do access-based models relate to matches? The Rust match is a very powerful construct that can do a lot of things! For example, it can extract fields from structs and tuples:

let x = (22, 44);
match x {
  (v, _) => ..., // reads the `x.0` field
  (_, w) => ..., // reads the `x.1` field
}

It can test which enum variant you have:

let x = Some(22);
match x {
  Some(_) => ...,
  None => ...,
}

And it can dereference a reference and read the data that it points at:

let x = &22;
match x {
  &w => ..., // Equivalent to `let w = *x;`
}

So how do we decide which data a match looks at? The idea is that you should be able to figure that out by looking at the patterns in the match arms and seeing what data they touch:

  • If you have a pattern with an enum variant like Some(_), then it must access the discriminant of the enum being matched.
  • If you have a &-pattern, then it must dereference the reference being matched.
  • If you have a binding, then it must copy out the data that is bound (e.g., the v in (v, _)).

This seems obvious enough. But what about when dealing with an uninhabited type? If I have match x { }, there are no arms at all, so what data does that access?

The key here is to think about the matches after the auto-never transformation has been done. In that case, we will never have an “empty match”, but rather a ! pattern – possibly wrapped in some other patterns. Just like any other enum pattern, this ! pattern is logically a kind of “discriminant read” – but in this case we are reading from a discriminant that cannot exist (and hence we can conclude the code is dead).

So, for example, we had a “reference-to-never” situation, like so:

let x: &! = ...;
match x { }

then this would be desugared into

let x: &! = ...;
match x { &! }

Looking at this elaborated form, the presence of the & pattern makes it clear that the match will access *x, and hence that the reference x must be valid (or else we have UB) – and since no valid reference to ! can exist, we can conclude that this match is dead code.

Devil is in the details

Now that we’ve introduced the idea of unsafe code and so forth, there are two particular interactions between the auto-never rules and unsafe code that I want to revisit:

  • Uninitialized memory, which explains why – when we auto-never a tuple type – we require all fields of the tuple to have uninhabited type, instead of just one.
  • References, which require some special care. In the auto-never rules as I proposed them earlier, we used a lint to try and thread the needle here.

Auto-never of tuple types and uninitialized memory

In the auto-never rules, I wrote the following:

  • When matching a tuple or struct (a "product type"), we will "auto-never" all of the fields.
    • So e.g. if matching a (!, !) tuple, we would auto-never a (!, !) pattern.
    • But if matching a (u32, !) tuple, auto-never would fail. You would have to explicit write (_, !) as a pattern – we’ll cover this case when we talk about unsafe code below.

You might think that this is stricter than necessary. After all, you can't possibly construct an instance of a tuple type like (u32, !), since you can't produce a ! value for the second half. So why require that all fields be uninhabited?

The answer is that, using unsafe code, it is possible to partially initialize a value like (u32, !). In other words, you could create code that just uses the first field, and ignores the second one. In fact, this is even quite reasonable! To see what I mean, consider a type like Uninit, which allows one to manipulate values that are possibly uninitialized (similar to the one introduced in RFC 1892):

union Uninit<T> {
  value: T,
  uninit: (),
}

Note that the contents of a union are generally only known to be valid when the fields are actually accessed (in general, unions may have fields of more than one type, and the compiler doesn't know which one is the correct type at any given time – hopefully the programmer does).

Now let’s consider a function foo that uses Uninit. foo is generic over some type T; this type gets constructed by invoking the closure op:

fn foo<T>(op: impl FnOnce() -> T) {
  unsafe {
    let x: Uninit<(u32, T)> = Uninit { uninit: () };
    x.value.0 = 22; // initialize first part of the tuple
    ...
    match x.value {
      (v, _) => {
        // access only first part of the tuple
      }
    }
    ...
    x.value.1 = op(); // initialize the rest of the tuple
    ...
  }
}

For some reason, in this code, we need to combine the result of this closure (of type T) with a u32, and we need to manipulate that u32 before we have invoked the closure (but probably after too). So we create an uninitialized (u32, T) value, using Uninit:

    let x: Uninit<(u32, T)> = Uninit { uninit: () };

Then we initialize just the x.value.0 part of the tuple:

    x.value.0 = 22; // initialize first part of the tuple

Finally, we can use operations like match (or just direct field access) to pull out parts of that tuple. In so doing, we are careful to ignore (using _) the parts that are not yet initialized:

    match x.value {
      (v, _) => {
        // access only first part of the tuple
      }
    }

Now, everything here is hunky-dory, right? Well, now what happens if I invoke foo with a closure op that never returns? That closure might have the return value ! – and now x has the type Uninit<(u32, !)>. This tuple (u32, !) is supposed to be uninhabited, and yet here we are initializing it (well, the first half) and accessing it (well, the first half). Is that ok?

In fact, when we first enabled full exhaustiveness checking and so forth, we hit code using exactly these kinds of patterns. (Only that code wasn’t yet using a union like Uninit – it was using mem::uninitialized, which creates problems of its own.)

In general, a goal for the auto-never rules was that they would only apply when there is no matchable data accessible from the value. In the case of a type like (u32, !), it may be (as we have seen) that there is usable data (the u32); so if we accepted match x { }, that would mean that one could still add a pattern like (x, _) which would (a) extract data, (b) not be dead code, and (c) not be UB. Seems bad.

Reference patterns and linting

Now that we are armed with this idea of ! and the auto-never transformation, we can examine the problem of reference types, which turns out to be the primary case where the needs of safe and unsafe code come into conflict.

Throughout this post, I’ve been assuming that we want to treat values of types like &! as effectively “uninhabited” – this follows from the fact that we want Result<String, !> to be something that you can work with ergonomically in safe code. Since a common thing to do is to use as_ref() to transform a &Result<String, !> into a Result<&String, &!>, I think we would still want the compiler to understand that the Err variant ought to be treated as impossible in such a type.
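
To make the ergonomic goal concrete, here is a minimal sketch of what safe code has to write today, using an empty enum in place of the unstable ! type (the function and enum names are mine, purely for illustration):

enum Never {}

fn unwrap_ok(r: &Result<String, Never>) -> &String {
    // as_ref() turns &Result<String, Never> into Result<&String, &Never>.
    match r.as_ref() {
        Ok(s) => s,
        // Today this arm is still required; under the rules discussed here,
        // safe code could omit it, since &Never would be treated as uninhabited.
        Err(_) => unreachable!(),
    }
}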

Unfortunately, when it comes to unsafe code, there is a general desire to treat any reference &T “with suspicion”. Specifically, we don’t want to make the assumption that this is a reference to valid, initialized memory unless we see an explicit dereference by the user. This is really the heart of the “access-based” philosophy.

But that implies that a value of type &! ought not be considered uninhabited – it might be a reference to uninitialized memory, for example, that is never intended to be used.

If we indeed permit you to treat &! values as uninhabited, then we are making it so that match statements can “invisibly” insert dereferences for you that you might not expect. That seems worrisome.

Auto-never patterns give us a way to resolve this impasse. For example, when matching on a &! value, we can insert the &! pattern automatically – but lint if that occurs in an unsafe function or a function that contains an unsafe block (or perhaps a function that manipulates raw pointers). Users can then silence the lint by writing out a &! pattern explicitly. Effectively, the lint would enforce the rule that “in and around unsafe code, you should write out &! patterns explicitly, but in safe code, you don’t have to”.

Alternatively, we could limit the auto-never transformation so that &T types do not “auto-never” – but that imposes an ergonomic tax on safe code.

Conclusion

This post describes the idea of a “never pattern” (written !) that matches against the ! type or any other “empty enum” type. It also describes an auto-never transformation that inserts such patterns into matches. As a result – in the desugared case, at least – we no longer use the absence of a match arm to designate matches against uninhabited types.

Explicit ! patterns make it easier to define what data a match will access. They also give us a way to use lints to help bridge the needs of safe and unsafe code: we can encourage unsafe code to write explicit ! patterns where they might help document subtle points of the semantics, without imposing that burden on safe code.

Robert O'Callahan: The Parallel Stream Multiplexing Problem

Imagine we have a client and a server. The client wants to create logical connections to the server (think of them as "queries"); the client sends a small amount of data when it opens a connection, then the server sends a sequence of response messages and closes the connection. The responses must be delivered in-order, but the order of responses in different connections is irrelevant. It's important to minimize the start-to-finish latency of connections, and the latency between the server generating a response and the client receiving it. There could be hundreds of connections opened per second and some connections produce thousands of response messages. The server uses many threads; a connection's responses are generated by a specific server thread. The client may be single-threaded or use many threads; in the latter case a connection's responses are received by a specific client thread. What's a good way to implement this when both client and server are running in the same OS instance? What if they're communicating over a network?
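
To make the shape of the problem concrete, here is a rough sketch (my own, not from the post) of the logical frames such a multiplexed protocol needs; the names are invented for illustration:

// Connection IDs are allocated by the client; the server echoes them on
// every response so the client can route messages to the right logical
// connection.
type ConnId = u32;

enum Frame {
    Open { conn: ConnId, request: Vec<u8> },     // client -> server, small payload
    Response { conn: ConnId, payload: Vec<u8> }, // server -> client, in order per connection
    Close { conn: ConnId },                      // server -> client, ends the connection
}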

This problem seems quite common: the network case closely resembles a Web browser fetching resources from a single server via HTTP. The system I'm currently working on contains an instance of this internally, and communication between the Web front end and the server also looks like this. Yet even though the problem is common, as far as I know it's not obvious or well-known what the best solutions are.

A standard way to handle this would be to multiplex the logical connections into a single transport. In the local case, we could use a pair of OS pipes as the transport, a client-to-server pipe to send requests and a server-to-client pipe to return responses. The client allocates connection IDs and the server attaches connection IDs to response messages. Short connections can be very efficient: a write syscall to open a connection, a write syscall to send a response, maybe another write syscall to send a close message, and corresponding read syscalls. One possible problem is server write contention: multiple threads sending responses must make sure the messages are written atomically. In Linux this happens "for free" if your messages are all smaller than PIPE_BUF (4096), but if they aren't you have to do something more complicated, the simplest being to hold a lock while writing to the pipe, which could become a bottleneck for very parallel servers. There is a similar problem with client read contention, which is mixed up with the question of how you dispatch received responses to the thread reading from a connection.
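
As a sketch of that "hold a lock while writing" approach (my own illustration, with invented names and a simple id + length + payload framing, not anything from the post), a shared writer might look like this:

use std::io::{self, Write};
use std::sync::Mutex;

struct FramedSender<W: Write> {
    writer: Mutex<W>,
}

impl<W: Write> FramedSender<W> {
    fn new(writer: W) -> Self {
        FramedSender { writer: Mutex::new(writer) }
    }

    // Frames a response as [connection id][length][payload]. Holding the
    // Mutex for the whole frame keeps writes from different server threads
    // from interleaving, regardless of PIPE_BUF.
    fn send(&self, conn_id: u32, payload: &[u8]) -> io::Result<()> {
        let mut w = self.writer.lock().unwrap();
        w.write_all(&conn_id.to_le_bytes())?;
        w.write_all(&(payload.len() as u32).to_le_bytes())?;
        w.write_all(payload)?;
        w.flush()
    }
}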

A better local approach might be for the client to use an AF_UNIX socket to send requests to the server, and with each request message pass a file descriptor for a fresh pipe that the server should use to respond to the client. It requires a few more syscalls but client threads require no user-space synchronization, and server threads require no synchronization after the dispatch of a request to a server thread. A pool of pipes in the client might help.

The network case is harder. A naive approach is to multiplex the logical connections over a TCP stream. This suffers from head-of-line blocking: a lost packet can cause delivery of all messages to be blocked while the packet is retransmitted, because all messages across all connections must be received in the order they were sent. You can use UDP to avoid that problem, but then you need encryption, retransmits, congestion control, etc., so you probably want to use QUIC or something similar.

The Web client case is interesting. You can multiplex over a WebSocket much like a TCP stream, with the same disadvantages. You could issue an HTTP request for each logical connection, but this would limit the number of open connections to some unknown maximum, and could have even worse performance than the Websocket if the browser and server don't negotiate QUIC + HTTP2. A good solution might be to multiplex the connections into a RTCDataChannel in non-ordered mode. This is probably quite simple to implement in the client, but fairly complex to implement in the server because the RTCDataChannel protocol is complicated (for good reasons AFAIK).

This multiplexing problem seems quite common, and its solutions interesting. Maybe there are known best practices or libraries for this, but I haven't found them yet.

Cameron Kaiser: TenFourFox FPR9b2 available

TenFourFox Feature Parity Release 9 beta 2 is now available (downloads, hashes, release notes). This version tightens up the geometry on the date/time pickers a little, adds some more hosts to basic adblock, fixes a rare but easily wallpapered crash bug and further tunes up hash tables using a small patch from Firefox 63 (!). I am looking at a new JavaScript issue which does not appear to be a regression, but I'd like to fix it anyway since it may affect other sites. However, I'm not sure if this is going to make FPR9 final, which is still scheduled on or about September 4 due to the American Labor Day holiday on the usual Monday.

The WiFi fix in beta 1 was actually to improve HTML5 geolocation accuracy, and Chris T has confirmed that it does, so that's been updated in the release notes. Don't worry, you are always asked before your location is sent to a site.

On the Talos II side, I've written an enhancement to KVMPPC allowing it to actually monkeypatch Mac OS X with an optimized bcopy in the commpage. By avoiding the overhead of emulating dcbz's behaviour on 32-bit PPC, this hack improves the T2's Geekbench score by almost 200 points in Tiger. Combined with another small routine to turn dcba hints into nops so they don't cause instruction faults, this greatly reduces stalls and watchdog tickles when running Mac apps in QEMU. I'll have a formal article on that with source code for the grubby proletariat shortly, plus a big surprise launch of something I've been working on very soon. Watch this space.

Daniel Stenberg: A hundred million cars run curl

One of my hobbies is to collect information about where curl is used. The following car brands feature devices, infotainment and/or navigation systems that use curl - in one or more of their models.

These are all brands about which I've found information online (for example curl license information), received photos of or otherwise been handed information by what I consider reliable sources (like involved engineers).

Do you have curl in a device installed in another car brand?

List of car brands using curl

Baojun, BMW, Buick, Cadillac, Chevrolet, Ford, GMC, Holden, Hyundai, Mazda, Mercedes, Nissan, Opel, Renault, Seat, Skoda, Subaru, Suzuki, Tesla, Toyota, VW and Vauxhall.

Altogether, this is a pretty amazing number of installations. This list contains eight (8) of the top-10 car brands in the world in 2017! And all of the top-3 brands. By my rough estimate, something like 40 million cars sold in 2017 had curl in them. Presumably almost as many in 2016 and a little more in 2018 (based on car sales stats).

Not too shabby for a little spare time project.

How to find curl in your car

Sometimes the curl open source license is included in a manual (it includes my name and email, offering more keywords to search for). That’s usually how I’ve found out about these uses purely online.

Sometimes the curl license is included in the "open source license" screen within the actual infotainment system. Those tend to list hundreds of different components and without any search available, you often have to scroll for many minutes until you reach curl or libcurl. I occasionally receive photos of such devices.

Related: why is your email in my car and I have toyota corola.

Update: I added Tesla and Hyundai to the list after the initial post. The latter of those brands is a top-10 brand which bumped the counter of curl users to 8 out of the top-10 brands!

Mike Hommey: Announcing git-cinnabar 0.5.0

Git-cinnabar is a git remote helper to interact with mercurial repositories. It allows you to clone, pull and push from/to mercurial remote repositories, using git.

Get it on github.

These release notes are also available on the git-cinnabar wiki.

What’s new since 0.4.0?

  • git-cinnabar-helper is now mandatory. You can either download one with git cinnabar download on supported platforms or build one with make.
  • Performance and memory consumption improvements.
  • Metadata changes require to run git cinnabar upgrade.
  • Mercurial tags are consolidated in a separate (fake) repository. See the README file.
  • Updated git to 2.18.0 for the helper.
  • Improved memory consumption and performance.
  • Improved experimental support for pushing merges.
  • Support for clonebundles for faster clones when the server provides them.
  • Removed support for the .git/hgrc file for mercurial specific configuration.
  • Support any version of Git (was previously limited to 1.8.5 minimum)
  • Git packs created by git-cinnabar are now smaller.
  • Fixed incompatibilities with Mercurial 3.4 and >= 4.4.
  • Fixed tag cache, which could lead to missing tags.
  • The prebuilt helper for Linux now works across more distributions (as long as libcurl.so.4 is present, it should work)
  • Properly support the pack.packsizelimit setting.
  • Experimental support for initial clone from a git repository containing git-cinnabar metadata.
  • Now can successfully clone the pypy and GNU octave mercurial repositories.
  • More user-friendly errors.

Development process changes

It took about 6 months between version 0.3 and 0.4. It took more than 18 months to reach version 0.5 after that. That’s a long time to wait for a new version, considering all the improvements that have happened under the hood.

From now on, the release branch will point to the last tagged release, which is roughly the same as before, but won’t be the default branch when cloning anymore.

The default branch when cloning will now be master, which will receive changes that are acceptable for dot releases (0.5.x). These include:

  • Changes in behavior that are backwards compatible (e.g. adding new options which default to the current behavior).
  • Changes that improve error handling.
  • Changes to existing experimental features, and additions of new experimental features (that require knobs to be enabled).
  • Changes to Continuous Integration/Tests.
  • Git version upgrades for the helper.

The next branch will receive changes for the next “major” release, which as of writing is planned to be 0.6.0. These include:

  • Changes in behavior.
  • Changes in metadata.
  • Stabilizing experimental features.
  • Removing backwards compatibility with older metadata (< 0.5.0).

Mozilla VR Blog: This Week in Mixed Reality: Issue 15

This week is mainly about bug fixing and getting some new features to launch.

Browsers

Firefox Reality is in the bug fixing phase, keeping the team very busy:

  • The team has reviewed the report from testing with actual users. Lots of changes are in progress.
  • Burning down bugs in Firefox Reality. Big ones this week include refactoring immersive mode and improving loading times.
  • Fixed broken OAuth logins and opening pages in new windows.

Social

A bunch of bug fixes and improvements to Hubs by Mozilla:

  • Support for single sided objects to reduce rendering time on things like walls where you only need to see one side
  • Fixes for the pen drawing tool

Content Ecosystem

Hacks.Mozilla.Org: MDN Changelog for July 2018: CDN tests, Goodbye Zones, and BCD

Editor’s note: A changelog is “a log or record of all notable changes made to a project. [It] usually includes records of changes such as bug fixes, new features, etc.” Publishing a changelog is kind of a tradition in open source, and a long-time practice on the web. We thought readers of Hacks and folks who use and contribute to MDN Web Docs would be interested in learning more about the work of the MDN engineering team, and the impact they have in a given month. We’ll also introduce code contribution opportunities, interesting projects, and new ways to participate.

Here’s what happened in July to the code, data, and tools that support MDN Web Docs:

Here’s the plan for August:

Done in July

Experimented with longer CDN expirations

We moved MDN Web Docs to a CDN in April 2018, and saw a 16% improvement in page load times. We shipped with 5-minute expiration times for MDN pages, so that the CDN will request a fresh copy after a short time. MDN is a wiki, and we can’t predict when a page will change. 300 seconds was a compromise between some caching for our most popular pages and how long an author would need to wait for a changed page to be published to all visitors. With such a short expiration, 80% of visitors are getting an uncached page.

Longer cache expirations would require cache invalidation, one of the two hard things in computer science. Before committing to the work, we wanted to estimate the expected performance benefits. From July 9 to 15, Ryan Johnson bumped the timeout from 5 minutes to 48 hours (PR 4876), and we gathered the performance data.

Average page load time decreased 3% over the previous week, a small and not significant improvement. The results for different countries were mixed: some slightly improved, and some were slightly worse. The outlier was China, where average page load time increased 22%, a significant decrease in performance.

Figure: page load time during the experiment vs. the previous week. Page load time in China was worse – 60% longer on July 13.

The page load time varied on weekdays versus weekends as well (positive percents are shorter page load times, better for users):

Country    Page Load Decrease, Weekday    Page Load Decrease, Weekend
All          1%                            -2%
USA          3%                             3%
India        2%                            -7%
China      -22%                           -35%
Japan        0%                            10%
France      -1%                            -5%
Germany      3%                             3%
UK           2%                             2%
Russia       0%                             2%
Brazil       2%                            -2%
Ukraine      6%                             1%

This was a successful experiment: we got an unexpected result with minimal work. At the same time, we’re curious why the longer CDN expiration had little effect for most users, and a negative effect for China. We have some theories.

CloudFront is Amazon’s CDN, and uses the same data centers and networks as MDN’s servers. MDN is optimized for quickly serving wiki pages, so a cache miss adds only 50-100 milliseconds to a request. The primary benefit of the CDN is reducing server load, and we did see a 25% – 50% reduction in requests made to the servers, especially during peak hours.

We’re currently directing CloudFront to cache pages, but telling downstream proxies and browsers not to cache the pages. A wiki page can change after someone edits it, and we wanted to avoid several layers of caches holding on to stale copies. Downstream caches may have a bigger impact than we expect on page load, and we can try allowing caching in the next experiment.

China has country-wide policies to monitor and control internet traffic. We don’t know the details, but longer caching times result in slower processing. We saw an improvement in China moving developer.mozilla.org to CloudFront, lowering the average page load time by 30%. It is possible that most of the benefit was due to removing a second domain lookup for assets. A future experiment may skip CloudFront for traffic from China.

There’s a significant difference between weekday and weekend traffic in some countries, like China and Japan. Our guess is that weekday traffic is dominated by developers using MDN for work, weekend traffic by developers using MDN for hobbies and learning. We also suspect there are differences between the capabilities of work week devices and home devices.

Finally, the results may be a limitation of CloudFront, and we would see different results with a different CDN provider.

We’ll look elsewhere for ways to speed up our page load times. For example, Schalk Neethling is working to replace icons via webfonts with SVG icons (PR 4860), and inlining short JavaScript files rather than making a request (PR 4881). We have further plans for reducing page load time, to meet our new performance goals.

Decommissioned zones

Ryan Johnson removed zones on July 24, merging PR 4853. From a user’s perspective, there are a few changes.

Custom zone URLs, like https://developer.mozilla.org/en-US/Firefox/Releases/61, are now at standard wiki URLs under /docs/, like https://developer.mozilla.org/en-US/docs/Mozilla/Firefox/Releases/61. There are redirects to the new URLs, so old links should continue working.

Custom zone styling is removed, and zone pages now look like other wiki pages. This is subtle on most pages, such as removing an icon next to the title. Other pages required a re-write, such as The History of MDN.

Figure: the subtle change when removing zone styles – on the Progressive Web Apps MDN page, the zone style adds an icon next to the title, which the page loses without zone styles.

Zone sidebars were converted to KumaScript sidebars, and added to each page in the zone, through the heroic efforts of wbamberg (PR 711 and a few others).

About 2600 lines of code were removed, about 10% of the codebase. The wiki code is now simpler, less error prone, and safer to update.

Converted compatibility data

In July of last year, the Browser Compatibility Data (BCD) project hit the milestone of over 1000 MDN pages using the new compatibility data, with about 4900 to convert. This month, there are less than 850 pages left to convert, and over 5000 MDN pages are using the new data. The steady work of the BCD team has made a huge impact on MDN and the community.

Visual Studio Code improved the accuracy of their data by adopting the BCD project in the June 2018 release. This was proposed by Pine in vscode-css-languageservice issue #102 and implemented in PR #105, with feedback from BCD and mdn/data contributor Connor Shea.

Figure: compatibility data from BCD shown as a tooltip when editing CSS in Visual Studio Code.

After a long discussion, the BCD project has updated the policy for Node.js version numbers (PR 2196, PR 2294, and others). At first, browser-style version numbers were used, such as “4”, “6”, and “8”, but the Node.js community requested “4.0.0”, “6.0.0”, and “8.0.0”, to reflect how they think of release numbers. This affected lots of files and unstuck several Node.js pull requests.

Florian Scholz went on vacation, and Daniel D. Beck took the lead on project maintenance, including shipping the npm package, now documented via PR 2480. Most of the PRs from the Paris Hack on MDN event are now merged or closed, and the project is down to 120 open PRs, representing about half of the remaining conversion work.

Shipped Tweaks and Fixes

There were 307 PRs merged in July:

58 of these were from first-time contributors:

Other significant PRs:

Planned for August

In August, we’ll continue working on new and improved interactive examples, converting compatibility data (aiming for less than 50 open PRs), switching to Python 3, improving performance, and other long-term projects.

Upgrade to Elasticsearch 5.6

Elasticsearch powers our little-loved site search, and we’re using version 2.4 in production. This version went out of support in February 2018, but our provider gave us until August to update. We used that grace period to update from Django 1.8 to 1.11. In August, we’ll update our client libraries and code so we can update to Elasticsearch 5.6, the next major release. We don’t expect many user-visible changes with the new server, but we also don’t plan to lose site search due to missing the deadline.

QMO: Firefox DevEdition 62 Beta 18 Testday, August 17th

Greetings Mozillians!

We are happy to let you know that on Friday, August 17th, we are organizing the Firefox 62 DevEdition Beta 18 Testday. We’ll be focusing our testing on the Activity Stream, React Animation Inspector, and Toolbars & Window Controls features. We will also have fixed bugs verification and unconfirmed bugs triage ongoing.

Check out the detailed instructions via this etherpad.

No previous testing experience is required, so feel free to join us on #qa IRC channel where our moderators will offer you guidance and answer your questions.

Join us and help us make Firefox better!

See you on Friday!

Support.Mozilla.Org: #5 State of Mozilla Support: 2018 Mid-year Update – Part 5

Hello, present and future Mozillians!

We are happy to share with you the final post of the series, which started with two external research report analyses, moved on to sharing updates and plans for support forums, social support, and localization, and now is about to conclude with our strategic summary.

The presentation that is the source of this post can be accessed here. The document is meant to be a set of recommendations for the Marketing team’s leadership as to the state and direction of Support (the Support team being a part of Marketing as of mid-2018).

An important disclaimer before we dive into the summary: as it is customary with projects and ideas, everything described below as a future move or plan is not set in stone and may not happen or can significantly change in nature or details, depending on many factors. That said, this summary should give you a general idea on where we are coming from and where we are headed to as Mozilla’s Support.

The recommendations are a result of external research, data analysis, and recent experiments. In general, we learned that:

  • Our site is not delivering optimal support the way it could (when compared to other support sites)
  • Our approach should probably be more tuned to specific product requirements (not a “one-size-fits-all” way of doing things)
  • Our community is stretched to its limits and needs more support and growth
  • We need to look into alternative approaches and experimental methods that may contradict our “old ways”
  • We can and should participate in shaping product development through the insights our community and users provide

Thus, the Support vision within Mozilla is evolving from “Partner with the Mozilla community to maximize user success and happiness.” into “People seek out support when they have a problem while using our products. We need to be there for them in ways they expect and in unexpected ways that will delight. We deliver product support that earns user’s trust, helps them take control, and empowers them to do more online.”

There is no reason for alarm due to the “Mozilla community” part missing from the updated vision. Just like all of Mozilla, Support happens in a huge part thanks to the tireless engagement of hundreds of people around the world. Going forward, the community should not be its only engine and driver. Meanwhile, the focus on the user through many different means (sometimes experimental) is at the core of Support’s vision for 2018 and beyond.

Further integration of Support into Mozilla’s overall product strategy means consciously diversifying between solid support for our flagship product (Firefox) while being agile and flexible about new and challenging projects coming from different parts of Mozilla that require support – be it Knowledge Base, Social, support forums, 1:1 or any other format.

For Firefox support, this means focusing on what we already know works and making it work better. For new products, we may want to try new ways of delivering support that step outside of what we have been doing so far. These new, experimental ways may later be expanded into Firefox support. On both fronts, Support will also focus on delivering interesting and impactful insights that shape the future of Mozilla’s products.

The above is broken down into five separate recommendations, described in more detail below.

Securing the foundation

With a huge number of users visiting the Support site every day for quality help powered by a small group of core contributors, we do not have a stable and solid foundation at the moment.

To avoid running into a one-way street and not delivering support to our users, we want to develop and redesign our community approach with the help of the Open Innovation team. This will come through a series of research explorations and experiments taking place in 2018.

The platform itself should also receive a few tweaks thanks to a more streamlined support from the Marketing Developer team.

Some of the options considered for this segment are:

  • Contextual recognition and unobtrusive gamification for our existing core contributors.
  • Prototyping a DIY learning program and experimenting with changing community communication channels.
  • Combining community coordination with Mission Driven Mozillians and the core Localization team.
  • Experimenting with pay-per-use services as backups.
  • Investing time and resources into pushing Social support to a new level.

Improving user experience

The Support site has not been reviewed or streamlined for user experience in recent years, resulting in its current design being dated and hard to navigate. With site search hobbled by technical challenges and a lack of development, the old information architecture is not enough to help users find the information they need.

Researching the site’s usability and reworking its visual appeal are key to changing the current state. For this to happen, we need to have technical and visual experts within Mozilla make the experience both modern and in-tune with Mozilla’s new aesthetics and back-end requirements.

As has been the case across the web in recent years, mobile formats keep being an important part of the user experience, so improving site performance for those on mobile devices is definitely a priority in the coming months.

Improving search (both within the site and as SEO for popular 3rd party search engines out there) is also a priority, although we are doing moderately well when it comes to content discovery on the Support site from the wider web.

An interesting direction of experimentation is the idea of having separate product support sites that may all fall under the Support umbrella, but with different content organization and presentation. This is in very early stages of discussion, so at the moment we can’t offer any more details.

The entire process should be as transparent and agile as we can make it, but ultimately it will involve some tough calls on what we need to change that may be outside of the hands of the community. We hope you trust us enough to make the site better based on the data and research available during the redesign period.

Delivering insights, mapping impact

Over the years, we have amassed quite a big stash of data that (if used with the right focus and purpose) could help us make the Support site itself much better, but also help the teams working on Mozilla’s products make well-informed decisions about development and patching priorities.

What we are finding challenging without additional resources is surfacing and organizing all of this data into a coherent set of insights.

For this to happen, we first could prototype improving internal reporting and automate as much of it as possible if the prototype reports prove useful.

Reworking some of our key metrics (for example through adding Customer Satisfaction measurement in all places where Support happens) and improving the technical side of reporting (through deployment of new community dashboards based on Bitergia) is another set of potential developments in this area.

All of the insights gathered and forwarded to either the developers or community members should help us connect the influence of Support activities and resources with user retention or other relevant product metrics.

For the above to happen, we need to work on identifying product metrics that the Product teams need from us and then expand the existing dashboards with additional data or representation methods.

Experimenting with new methods

At this moment, the only way support.mozilla.org stays active and useful is through the tireless and humbling engagement by our community members, who easily belong among the most ardent fans of Mozilla’s vision of the web.

This tried and tested method of providing support is not going away – but in order to adapt to the new directions Mozilla wants to explore, Support needs to flex a bit and get out of its “comfort zone”. New challenges mean new approaches, so there is a lot of space for trying things out (and succeeding or failing – we want to be ready for both options!).

What could some of those brave new worlds we want to explore be? The Google App Store experiment worked out quite well, so going further down that road is definitely on the table as an option.

Getting a friendly robotic mind to help out from time to time is also in tune with the future. Automated (but friendly!) support options could include email queues or chatbots. But code is not our only ally out there – we can also consider stepping outside of support.mozilla.org and reaching out with more resources to external communities (for example Reddit’s /r/firefox).

Finally, another area to explore, albeit quite costly from the perspective of time and resource investments, can be found on YouTube, where many people look for instructional or “useful tips” content.

Since these new areas need a lot of preparation, the rest of 2018 is an exploratory and brainstorming period in that respect, with more to come in 2019, especially through collaboration with Open Innovation on participation systems and alternative approaches.

Customizing product support

You could say that Firefox is the flagship product of Mozilla at this moment – and you would not be wrong at all. Even so, it has many faces and aspects that very often require slightly different approaches. But Firefox is not everything that Mozilla plans to offer in the (near) future. New products may benefit more from support solutions that have not been used on support.mozilla.org yet.

With the upcoming new flavours of Firefox and products beyond that, we might want to consider creating customized support strategies and tools for communities (but not only) to get involved through.

Giving new tools and new approaches a chance requires a very good understanding of where we are and where we could easily get without overinvesting time and energy in a complete overhaul of Support. It also means partnering much more closely with Product teams on their needs and engagement with Support in the future.

As this is yet another area we are hoping to boldly go into (but have little experience as of now), it’s targeted mostly for next year, rather than 2018.

What are we NOT planning on doing?

Now that you know a bit more, you may start wondering “how could all this possibly happen?” (don’t worry, you’re not the only one asking that question). It is good to make sure that we make clear what is not going to happen in the near future.

We are not planning to significantly redesign the current contribution experience and tools on support.mozilla.org (the idea is rather to expand or synthesize it).

We are also not going to invest time and effort into replacing the current support platform, since we have not fully explored its potential yet.

…and more

Being a part of Mozilla is exciting and challenging at all times – and the future is not going to bring anything less ;-) In 2018 and afterwards, the Support site is going to be involved in and impacted by the changes to Firefox as a brand (and as a bundle of interconnected products and services that are not only the browser itself), as well as the continuous integration of Pocket into Mozilla (which means expanding its available locales). Our platform will need to be revised and updated accordingly to match the requirements of the road ahead.

Whatever future steps we take or directions we look into, we want you to be a part of that journey. Together, we have gone through quite a few bumpy moments, and not having you as part of our community would make reaching new horizons harder and less fun. As always, we want to thank you for being there for users worldwide and for making Mozilla (and its Support) happen.

Onwards, towards the exciting unknown! :-)

Daniel Stenberg: How to DoH-only with Firefox

Firefox has supported DNS-over-HTTPS (aka DoH) since version 62.

You can instruct your Firefox to only use DoH and never fall back to the native resolver; this is the mode we call trr-only. Without any other ability to resolve host names, this is a little tricky, so this guide is here to help you. (This situation might improve in the future.)

In trr-only mode, neither anybody on your local network nor your ISP can snoop on your name resolves. The SNI part of HTTPS connections is still clear text though, so eavesdroppers on the path can still figure out which hosts you connect to.

There's a name in my URI

A primary problem for trr-only is that we usually want to use a host name in the URI for the DoH server (we typically need it to be a name so that we can verify the server's certificate against it), but we can't resolve that host name until DoH is setup to work. A catch-22.

There are currently two ways around this problem:

  1. Tell Firefox the IP address of the name that you use in the URI. We call it the "bootstrapAddress". See further below.
  2. Use a DoH server that is provided on an IP-number URI. This is rather unusual. There's for example one at 1.1.1.1.

Setup and use trr-only

There are three prefs to focus on (they're all explained elsewhere):

network.trr.mode - set this to the number 3.

network.trr.uri - set this to the URI of the DoH server you want to use. This should be a server you trust and want to hand over your name resolves to. The Cloudflare one we've previously used in DoH tests with Firefox is https://mozilla.cloudflare-dns.com/dns-query.

network.trr.bootstrapAddress – when you use a host name in the URI for the network.trr.uri pref, you must set this pref to an IP address that host name resolves to for you. It is important that you pick an IP address that the name you use actually would resolve to.

Example

Let's pretend you want to go full trr-only and use a DoH server at https://example.com/dns. (it's a pretend URI, it doesn't work).

Figure out the bootstrapAddress with dig. Resolve the host name from the URI:

$ dig +short example.com
93.184.216.34

or if you prefer to be classy and use the IPv6 address (only do this if IPv6 is actually working for you)

$ dig -t AAAA +short example.com
2606:2800:220:1:248:1893:25c8:1946

dig might give you a whole list of addresses back, and then you can pick any one of them in the list. Only pick one address though.

Go to "about:config" and paste the copied IP address into the value field for network.trr.bootstrapAddress. Now TRR / DoH should be able to get going. When you can see web pages, you know it works!
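
If you prefer a user.js file in your profile over flipping prefs in about:config, the same setup could look like this sketch (using the pretend server and the dig result from above – replace the URI and the bootstrap address with your real DoH server’s values):

// user.js sketch - replace with your own DoH server's URI and address
user_pref("network.trr.mode", 3);
user_pref("network.trr.uri", "https://example.com/dns");
user_pref("network.trr.bootstrapAddress", "93.184.216.34");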

DoH-only means only DoH

If you happen to start Firefox behind a captive portal while in trr-only mode, the connections to the DoH server will fail and no name resolves can be performed.

In those situations, normally Firefox’s captive portal detector would trigger and show you the login page etc., but when no names can be resolved and the captive portal can’t respond with a fake response to the name lookup and redirect you to the login, it won’t get anywhere. It gets stuck. And currently, there’s no good visual indication anywhere that this is what happens.

You simply can't get out of a captive portal with trr-only. You probably then temporarily switch mode, login to the portal and switch the mode to 3 again.

If you "unlock" the captive portal with another browser/system, Firefox's regular retries while in trr-only will soon detect that and things should start working again.

Mozilla Reps Community: Rep of the Month – July 2018

Please join us in congratulating Lívia Takács, our Rep of the Month for July 2018!

Livia is a UI developer and visual designer from Hungary and has been part of the Reps program for a bit more than a year. In that time she organized a lot of events with different communities (like LibreOffice) and also workshops.

She also organizes Mozilla Hungary community meetings and a few months ago she organized the Firefox Support Sprint in Budapest.

She is a mentor and teacher at specific workshops where she teaches JavaScript and localization, such as MozSkool, which teaches JavaScript to 15-18 year old girls, and the MozScope localization workshops. This is a great example of how a community builder can not only organize events but also mentor and teach new people to help grow the local community.

Thanks Lívia, keep rocking the Open Web! :tada: :tada:

Please head over to Discourse to congratulate her!

The Firefox Frontier: Make your Firefox browser a privacy superpower with these extensions

Privacy is important for everyone, but often in different ways. That’s part of why Firefox Extensions are so powerful. Starting with a browser like Firefox, that’s built for privacy out … Read more

The post Make your Firefox browser a privacy superpower with these extensions appeared first on The Firefox Frontier.

Hacks.Mozilla.Org: AV1 and the Video Wars of 2027

The End of Shared Innovation

Meanwhile MalCorp found a way to tweak the law so its patents would never expire. It proposed a special amendment, just for patent pools, that said: Any time any part of any patent changes, the entire pool is treated as a new invention under U.S. law. With its deep pockets, MalCorp was able to buy the votes needed to get its law passed.

MalCorp’s patents would not expire. Not in 20 years. Not ever. And because patent law is about as interesting as copyright law, few protested the change.

Things went downhill quickly for advocates of the open web. MalCorp’s patents became broader, vaguer, ever-changing. With billions in its war chest, MalCorp was able to sue royalty-free codecs like AV1 out of existence. MalCorp had won. It had a monopoly on web streaming technology. It began, slowly at first, to raise licensing fees.

Gorgeous Video, Crushing Fees

For those who could afford it, web video got much better. MalCorp’s newest high-efficiency video codecs brought pixel-perfect 32K-Strato-Def images and 3D sound into people’s homes. Video and audio were clear and rich – better than real life. Downloads were fast. Images were crisp and spectacular. Fees were high.

Without access to any competing technologies, streaming companies had to pay billions instead of millions a year to MalCorp. Streaming services had to 100x their prices to cover their costs. Monthly fees rose to $4,500. Even students had to pay $50 a minute to watch a lecture on YouTube. Gradually, the world began to wake up to what MalCorp had done.

Life Indoors

By the mid-twenties, the Robotic Age had put most people out of work. The lucky ones lived on fixed incomes, paid by their governments. Humans were only needed for specialized service jobs, like nursery school teachers and style consultants. Even doctors were automated, using up-to-the-minute, crowd-sourced data to diagnose disease and track trends and outbreaks.

People were idle. Discontent was rising. Where once a retired workforce might have traveled or pursued hobbies, growing environmental problems rendered the outside world mostly uninhabitable. People hiked at home with their headsets on, enjoying stereoscopic birdsong and the idea of a fresh breeze. We lived indoors, in front of screens.

Locked In, Locked Out

It didn’t take long for MalCorp to become the most powerful corporation in the world. When video and mixed reality files made up 90 percent of all internet traffic, MalCorp was collecting on every transmission. Still, its greed kept growing.

Fed up with workarounds like piracy sites and peer-to-peer networks, MalCorp dismantled all legacy codecs. The slow, furry, lousy videos that were vaguely affordable ceased to function on modern networks and devices. People noticed when the signal went dark. Sure, there was still television and solid state media, but it wasn’t the same. Soon enough, all hell broke loose.

The Wars Begin

During Super Bowl LXII, football fans firebombed police stations in 70 cities, because listening to the game on radio just didn’t cut it. Thousands died in the riots and, later, in the crackdowns. Protesters picketed Disneyland, because the people had finally figured out what had happened to their democracy, and how it got started.

For the first time in years, people began to organize. They joined chat rooms and formed political parties like VidPeace and YouStream, vying for a majority. They had one demand: Give us back free video on the open web. They put banners on their vid-free Facebook feeds, advocating for the liberation of web video from greedy patent holders. They rallied around an inalienable right, once taken for granted, to be able to make and watch and share their own family movies, without paying MalCorp’s fees.

But it was too late. The opportunity to influence the chain of events had ended years before. Some say the tipping point was in 2019. Others blame the apathy and naiveté of early web users, who assumed tech companies and governments would always make decisions that served the common good. That capitalism would deliver the best services, in spite of powerful profit motives. And that the internet would always be free.

The Servo Blog: GSoC wrap-up - Splitting Servo's script crate

Introduction

I am Peter Hrvola (retep007) Twitter Github. During my Google Summer of Code (GSoC) project, I have been working on investigating the monolithic nature of Servo’s script crate and prototyping its separation into smaller crates. My goal was to improve the use of resources during compilation. A current debug build consumes over 5 GB of memory and takes 347 s.

The solution introduces a TypeHolder trait which contains associated types, and makes many structures in the script crate generic over this new trait. This allows the generic structs to refer to the new trait’s associated types, while the actual concrete types can be extracted into a separate crate. Testing shows significant improvement in memory consumption (25% lower) and build time (27% faster).

The process

For prototyping, I have been using two repositories. One contains a stripped-down, minimal script crate with only a few implementations of complex script types and no build-time code generation. This minimal script crate was used to investigate ideas. The second repository is a complete fork of the entire Servo repository to ensure that the ideas could actually work in Servo itself.

I started to work on the project very early. During the community bonding period I wanted to make a few small pull requests to separate some isolated parts of the script crate. As it turned out, there is no low-hanging fruit in the script crate. I quickly encountered many unexpected issues, so I moved on to the original plan.

All of the original ideas for investigation can be found in the GSoC project proposal. However, during the first week of coding, I experienced many issues and came up with something that is more or less a combination of all three proposed ideas.

The biggest problem, which caused most of the troubles and pain, is that Rust at the time of doing this project did not support the generic consts RFC.

After two months of solving errors, I was finally able to compile full Servo and take some measurements, which I talk about later in this blog post.

The original GSoC project assignment was to prototype ways of splitting the script crate, but since the results were quite promising and I had a month of coding left, I started to prepare a PR against Servo master. During the prototyping period, I made a few ugly hacks to speed up development. To properly fix these hacks, I needed to modify the build-time code generation to generate generic code that uses the TypeHolder trait, as well as find replacements for thread-local variables that needed to store generic values.

How it works

The final idea of the separation is based on the TypeHolderTrait. TypeHolderTrait is a trait with an associated type for every WebIDL interface that we want to extract from the script crate. TypeHolderTrait itself is defined in the script crate, along with a trait for each of those associated types. However, it is implemented outside of the script crate so it can provide concrete types. TypeHolder enables us to use constructs like static methods in the script crate. Later, we use TypeHolder as a type parameter for structs and methods that require access to external types. Let’s take an original DOM struct like DOMParser:

struct DOMParser {
    // ...fields elided...
}

impl DOMParser {
    fn new_inherited(window: &Window) -> DOMParser { /* ... */ }

    fn new(window: &Window) -> DomRoot<DOMParser> { /* ... */ }

    pub fn Constructor(window: &Window) -> Fallible<DomRoot<DOMParser>> { /* ... */ }
}

This struct definition is removed from the script crate. A DOMParserTrait with the public methods is created and added as an associated type to TypeHolderTrait, which can then be used in place of the original DOMParser type.

// Sized so that Self can be used as the type parameter TH below.
trait TypeHolderTrait: Sized {
    type DOMParser: DOMParserTrait<Self>;
}

trait DOMParserTrait<TH: TypeHolderTrait>: DOMParserMethods<TH> {
    fn Constructor(window: &Window<TH>) -> Fallible<DomRoot<TH::DOMParser>>;
}
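
For illustration, the crate outside script could then plug its concrete types in roughly like this (a hypothetical sketch; the type names are placeholders, not the actual Servo code):

// In the downstream crate (e.g. script_servoparser), hypothetical sketch:
struct ConcreteTypeHolder;

impl TypeHolderTrait for ConcreteTypeHolder {
    // ConcreteDOMParser is a placeholder for a type defined in this crate
    // that implements DOMParserTrait<ConcreteTypeHolder>.
    type DOMParser = ConcreteDOMParser;
}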

Effects on Servo

Modifications in the final PR should have only minimal effects on Servo’s speed. However, the codebase has undergone big surgery: over 12000 modified lines 🏥! The most intrusive change is using TypeHolder wherever it is required, and it turns out that TypeHolder is needed in a lot of places. The leading cause of such a large number of changes is GlobalScope, which is used all around the codebase and needs to be generic.

Due to Rust’s lack of support for generic static variables, in many places I had to modify the code to use initialization methods that fill static variables early during program initialization. For example, in Bindings, I added an InitTypeHolded function which replaces the content of mutable static variables with appropriate function calls.
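
A hypothetical sketch of the shape of that workaround, with invented names and using today’s std::sync::OnceLock in place of the mutable statics the actual PR uses:

use std::sync::OnceLock;

// A static cannot be generic over the TypeHolder, so the script crate keeps
// a type-erased slot that the concrete crate fills in at startup.
static PARSER_CONSTRUCTOR: OnceLock<fn()> = OnceLock::new();

// Called once, early during program initialization, by the concrete crate.
fn init_type_holder(ctor: fn()) {
    PARSER_CONSTRUCTOR
        .set(ctor)
        .expect("type holder already initialized");
}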

Speeeeed 🚀

I have done the testing on a MacBook Pro 2015 (High Sierra, 2.7 GHz dual-core i5, 16 GB RAM) using /usr/bin/time cargo build, which shows maximal memory usage during compilation. Results may vary depending on the build machine.

Cargo recompiles only crates that have been modified and crates that depend on them. I separated the script crate into script and script_servoparser. We took three samples: one for the original Servo, and two after the separation I made. For the separated script crate, compilation times were measured for each crate separately, with only one crate modified at a time. However, a change in the script crate also forces recompilation of script_servoparser.

Resources were measured in this way:

  1. compile full servo
  2. modify files
  3. measure resources used to build a full servo

Unchanged Servo:

            Servo
RAM         5.1 GB
Real time   3:49m

Servo after separation into two crates:

            Modified script crate    Modified script_servoparser crate
RAM         3.74 GB                  2 GB
Real time   2:56m                    1:48m

As we can see in the table above, resource usage during compilation has changed drastically. The main reason is the large number of generic structures, which postpone parts of compilation to later monomorphization. In the future, actual separation of DOM structs into upstream crates will lower this even more. In the current version, only six DOM structs were moved outside the script crate.

Future work

At the time of writing this post, work is still ongoing on the generic Servo modifications before creating a PR.

Things left to be done:

  1. Fix tests
  2. Performance optimization
  3. Polish PR

Important links

Conclusion

Working on a project for two months without a successful compilation and having the compiler yell that you have made 34174 mistakes is a bit scary. However, they say that the more mistakes you make, the more you learn. I guess I have made a lot of mistakes, and I have learned a ton, as I have constantly been pushing Rust to its limits in the large Servo codebase. All in all, this was an awesome project, and I enjoyed it very much.

I would like to thank my mentor Josh Bowman-Matthews (jdm) for this opportunity. It was such a pleasure to work with him.

David Lawrence: Happy BMO Push Day!

https://github.com/mozilla-bteam/bmo/tree/release-20180808.1

the following changes have been pushed to bugzilla.mozilla.org:

  • [1480891] my dashboard does not show the revision id and title for phabricator review requests
  • [1481893] After recent push of bug 1478897 bug/revision syncing has been broken due to coding error

discuss these changes on mozilla.tools.bmo.

Mozilla B-Teamhappy bmo push day!

happy bmo push day!

release tag

the following changes have been pushed to bugzilla.mozilla.org:

  • [1479350] “Phabricator Reviews Requested of You” lists bugs which I have reviewed
  • [1374266] Improve the “Zarro Boogs found” message
  • [1480169] Consider reducing the verbosity of phabricator ‘Revision Approved’ bugzilla comments
  • [1478897] ensure phabbugs doesn’t fail outright when encountering invalid bug ids
  • [1480599] Add…

View On WordPress

Hacks.Mozilla.OrgDweb: Social Feeds with Secure Scuttlebutt

In the series introduction, we highlighted the importance of putting people in control of their social interactions online, instead of allowing for-profit companies to be the arbiters of hate speech or harassment. Our first installment in the Dweb series introduces Secure Scuttlebutt, which envisions a world where users are in full control of their communities online.

In the weeks ahead we will cover a variety of projects that represent explorations of the decentralized/distributed space. These projects aren't affiliated with Mozilla, and some of them rewrite the rules of how we think about a web browser. What they have in common: these projects are open source, open for participation, and share Mozilla's mission to keep the web open and accessible for all.

This post is written by André Staltz, who has written extensively on the fate of the web in the face of mass digital migration to corporate social networks, and is a core contributor to the Scuttlebutt project. –Dietrich Ayala

Getting started with Scuttlebutt

Scuttlebutt is a free and open source social network with unique offline-first and peer-to-peer properties. As a JavaScript open source programmer, I discovered Scuttlebutt two years ago as a promising foundation for a new “social web” that provides an alternative to proprietary platforms. The social metaphor of mainstream platforms is now a more popular way of creating and consuming content than the Web is. Instead of attempting to adapt existing Web technologies for the mobile social era, Scuttlebutt allows us to start the construction of a new ecosystem from scratch.

A local database, shared with friends

The central idea of the Secure Scuttlebutt (SSB) protocol is simple: your social account is just a cryptographic keypair (your identity) plus a log of messages (your feed) stored in a local database. So far, this has no relation to the Internet: it is just a local database where your posts are stored in an append-only sequence, letting you write status updates like you would in a personal diary. SSB becomes a social network when those local feeds are shared among computers through the internet or through local networks. The protocol supports peer-to-peer replication of feeds, so that you can have local (and full) copies of your friends' feeds, and update them whenever you are online. One implementation of SSB, Scuttlebot, uses Node.js and allows UI applications to interact with the local database and the network stack.
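
Conceptually the pieces are tiny; a rough sketch of the data model (illustrative types only, not the actual format used by any SSB implementation) could look like this:

// An identity is nothing more than a signing keypair.
struct Identity {
    public_key: Vec<u8>,
    secret_key: Vec<u8>,
}

// Each message records its position in the log and the hash of the previous
// message, so a feed forms an append-only, tamper-evident chain.
struct Message {
    author: Vec<u8>,           // public key of the feed owner
    sequence: u64,             // 1, 2, 3, ... within this feed
    previous: Option<Vec<u8>>, // hash of the previous message (None for the first)
    content: String,           // e.g. a "post" with some text
    signature: Vec<u8>,        // signs all of the fields above
}

// A feed is just an identity plus its local, append-only log.
struct Feed {
    identity: Identity,
    log: Vec<Message>,
}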

Using Scuttlebot

While SSB is being implemented in multiple languages (Go, Rust, C), its main implementation at the moment is the npm package scuttlebot and the Electron desktop apps that use Scuttlebot. To build your own UI application from scratch, you can set up Scuttlebot plus a localhost HTTP server to render the UI in your browser.

Run the following npm command to add Scuttlebot to your Node.js project:

npm install --save scuttlebot

You can use Scuttlebot locally using the command line interface to post messages, view messages, and connect with friends. First, start the server:

$(npm bin)/sbot server

In another terminal you can use the server to publish a message in your local feed:

$(npm bin)/sbot publish --type post --text "Hello world"

You can also consume invite codes to connect with friends and replicate their feeds. Invite codes are generated by pub servers owned by friends in the community, which act as mirrors of feeds in the community. Using an invite code means the server will allow you to connect to it and will mirror your data too.

$(npm bin)/sbot invite.accept $INSERT_INVITE_CODE_HERE

To create a simple web app to render your local feed, you can start the scuttlebot server in a Node.js script (with dependencies ssb-config and pull-stream), and serve the feed through an HTTP server:

// server.js
const fs = require('fs');
const http = require('http');
const pull = require('pull-stream');
const sbot = require('scuttlebot/index').call(null, require('ssb-config'));

http
  .createServer((request, response) => {
    if (request.url.endsWith('/feed')) {
      pull(
        sbot.createFeedStream({live: false, limit: 100}),
        pull.collect((err, messages) => {
          response.end(JSON.stringify(messages));
        }),
      );
    } else {
      response.end(fs.readFileSync('./index.html'));
    }
  })
  .listen(9000);

Start the server with node server.js, and upon opening localhost:9000 in your browser, it should serve the index.html:

<html>

<body>
  <script>
    fetch('/feed')
      .then(res => res.json())
      .then(messages => {
        document.body.innerHTML = `
          <h1>Feed</h1>
          <ul>${messages
            .filter(msg => msg.value.content.type === 'post')
            .map(msg =>
              `<li>${msg.value.author} said: ${msg.value.content.text}</li>`
            )
          }</ul>
        `;
      });
  </script>
</body>

</html>

Learn more

SSB applications can accomplish more than social messaging. Secure Scuttlebutt is being used for Git collaboration, chess games, and managing online gatherings.

You build your own applications on top of SSB by creating or using plug-ins for specialized APIs or different ways of querying the database. See secret-stack for details on how to build custom plugins. See flumedb for details on how to create custom indexes in the database. Also there are many useful repositories in our GitHub org.

To learn about the protocol that all of the implementations use, see the protocol guide, which explains the cryptographic primitives used, and data formats agreed on.

Finally, don’t miss the frontpage Scuttlebutt.nz, which explains the design decisions and principles we value. We highlight the important role that humans have in internet communities, which should not be delegated to computers.

Support.Mozilla.OrgState of Mozilla Support: 2018 Mid-year Update – Part 4

The San Francisco 2018 All Hands flew by and so did the last two months. I cannot tell you how grateful I am to have been able to attend this event.

If I were to look back on some of the highlights, they would be pretty nitty gritty detailed. But I will share with you a few of them.

Meeting jscher2000 in person was quite an amazing experience. He is one of the contributors who has been part of the Mozilla Support Community for over 10 years. He is certainly a people person, cares so much about helping users find the easiest course of action to a better Firefox experience, and continues to enjoy being a part of the community. Even though he was only able to attend one day of the week, the interactions between him and the other contributors that were invited to attend were priceless. Many conversations that you may read from behind your computer on this webpage (https://support.mozilla.org/en-US/forums/contributors) came to life during that one day.

My second favorite highlight was meeting some of the French community – Pascal and Christophe. They showed me a lot of the content that they bring to events around France and the world to talk about Firefox VR development, Firefox FR support on social networks, and many other open source projects around the French community. Did you know they have had their own forum since before SUMO? (I hear some chuckles in the background) I also learned a lot about their culture and that many of their users are regular users just like you and me. (Compared to some of the power user communities out there) It opened my eyes to the many different communities all over the internet that provide help to Firefox users.

My third and final was talking to Cynthia and Noah about the upcoming motivations for the SUMO community and the engagement in the social program. During that hour we came up with different ideas on how to engage more people in the program and some of the ideas that they wanted to see happen as more contributors joined the program. (I know we have it on a post-it somewhere.)

(Also, don’t forget some of you went to In-n-Out for the first time! And I heard that some people tried riding electric bikes across the Golden Gate bridge on a nice brisk day! I am so happy that community members had this bonding experience!)

1: 1 SUMO help right now:

If you ever get an invitation to an All Hands, please go if you can. Witnessing the CIID research drive the SUMO team with Open Innovation to help prioritize experiments and projects for the second half of this year side by side with the community was amazing.  These experiments are what is driving the community discussions right now.  I would highly recommend you check them out as a member of the Mozilla Support Community. Please subscribe to them so you never miss an update.

At the SF All Hands, decisions were also made and are currently in the works. The projects that were prioritized in these discussions help these three SUMO objectives: improve community self-sufficiency, deliver support models for new products and support channels, increase returning user satisfaction when asking for user support.

So you may see the CSAT survey on the sumo site start to work again. You may see discussions around top user issue and see more reports around product releases. You may see a second experiment around Google Play Store reviews and you may see support channels for these new products become part of the support conversations on IRC or in the forums. New contributors are being recruited to help with mobile support, as well as helping monitor new experiments on social media and in the forums. The support is expanding beyond desktop, but also focusing on satisfying users with quality answers to encourage their return and continuous use of Firefox.

Mozilla and Marketing are focused on keeping the Firefox Desktop User, and enriching new mobile use experiences and visiting other planets. SF All Hands MoCo Plenary Session – June 12, 2018 (Watch the recap to understand “the other planets” reference)

What does that mean for user support forums?

The support.mozilla.org questions forums will still be the main official place for support. Even though it is mainly in English, content translated from English articles still supports the majority of users that come to the site looking for help in a different language. There are also the Spanish, Czech, Finnish, Hungarian, Indonesian (Rocket, etc.), Portuguese, Slovenian, Serbian and Turkish questions forums, with some being more active than others. There is not too much change on that front. But did you notice the new Firefox for Enterprise forum?

Trending user issues in the forum come from internal and external discussions around Mozilla's products (Reddit, Facebook, Twitter, WordPress, and LinkedIn to name a few). However, what is being reported on? The first are the continuous reports of top user issues around each of the releases since Quantum.

The team aspires to have a frequent top issues report for each release to communicate to the Product Project Managers and to continue those conversations for more ongoing user input for product decisions. Roland's wiki pages, the community's reported discussion threads from each release, and filed bugs from within the community and SUMO staff all contribute to this reporting effort. THANK YOU for helping give a voice to the user from the SUMO support channels.

Please join in the latest discussions here as we search for long-term solutions for fixing the dashboards and user issue dashboards mentioned in the previous blog post.

Ongoing, the Firefox for Desktop and Mobile browsers are starting to include more Shield studies for users that have opted in to provide feedback. Telemetry data help directly influence product decisions. So if you are using it, they are keeping it.

So what is next? Well, there are new products, like Firefox Reality, a new Firefox for Android (known as Fenix), and a number of other mobile experiences like Scout, Notes, and DNS over HTTPS coming to the Mozilla portfolio. So expect some new support strategies and potentially new support channels.

(Did you catch the Google Play Store Review Global Sprint?  Remember that new tool?)

Does that mean there are going to be new platforms for support? No, but the current one is getting a facelift. The developer Statement of Work for the SUMO redesign was planned to get the site up to Mozilla's global brand standards. This is way overdue: if you remember, the new design dates back to July 2017, and we just had a new brand announcement at the end of July as well. Check out the designs, aren't they beautiful? https://dev.sumo.moz.works/en-US/

When it comes to more ”day to day” business, it’s mostly ”same old, same old”. We keep helping open source product users with the issues they encounter. We want to get more organized and more self sufficient, though. Can you imagine running the whole site on your own? ME TOO!

So what would you need a community manager for? Ideally, day-to-day functionality on the site: notifications, keeping the site up, and making sure no bots or spam push the site off the network (darn DoS attacks!).

Never fear, you are not alone on this mission to support Firefox users. Isolation does not a community make. But just in case, one of the action items from our recent explorations was to have an emergency response plan for the community when something does go wrong. Remember “Contributors have ownership without agency” – that is where you come in!

What does that mean for Social Support?

In case you missed it…. See Mozilla Social Support and the next steps after the SF All Hands

Featured in our next blog post: top issues and what an emergency response team can do about them. Want to write a blog post for SUMO? Send a PM to an admin on the forums!

Mozilla Reps CommunityOnboarding team for 2nd half of 2018

This blogpost was composed by Daniele Scasciafratte

As we have entered the second half of the year, the Reps Council has worked on updating  the Onboarding Screening Team for 2018-2.

The scope of this team is to help on evaluating the new applications to the Reps program by helping the Reps Council on this process.

In June we opened a call for the new team members on https://discourse.mozilla.org/t/do-you-want-join-the-onboarding-screening-team-for-2018-2/29713

We got 13 applications in total. Out of them, 9 applications fit the Selection Criteria defined on https://wiki.mozilla.org/ReMo/Webinar/Screening; you can find the people that applied on https://github.com/mozilla/Reps/issues/333.

After 2 weeks the Reps Council voted and chose the new members.

The new members of the team for the next 6 months are:

The previous team members were:

 

The previous  team worked on reviewing 17 applications in 7 rounds in the last 7 months. Thanks a lot for your hard work!

Looking into the numbers of applications compared to the 2018-1 team (https://blog.mozilla.org/mozillareps/2018/02/15/reps-on-boarding-team/), the number of applications has declined compared to the previous year.

The new team will start to work soon (we have about 3 applications in the queue). A Reps Council Member will also join the team, focusing on communications between applicants and the evaluation team.

If you want to congratulate your fellow Reps you can do it in this thread: https://discourse.mozilla.org/t/onboarding-screening-team-2018-2/30794

Daniel Stenbergmuch faster curl uploads on Windows with a single tiny commit

These days, operating system kernels provide TCP/IP stacks that can do really fast network transfers. It's not even unusual for ordinary people to have gigabit connections at home, and of course we want our applications to be able to take advantage of them.

I don't think many readers here will be surprised when I say that fulfilling this desire turns out to be much easier said than done in the Windows world.

Autotuning?

Since Windows 7 / 2008R2, Windows implements send buffer autotuning. Simply put, the faster the transfer and the longer the RTT the connection has, the larger the buffer it uses (up to a max), so that more un-acked data can be outstanding and thus enable the system to saturate even really fast links.

Turns out this useful feature isn't enabled when applications use non-blocking sockets. The send buffer isn't increased at all then.

Internally, curl is using non-blocking sockets and most of the code is platform agnostic, so it wouldn't be practical to switch that off for a particular system. The code is pretty much independent of the target that will run it, and now with this latest find we have also started to understand why it doesn't always perform as well on Windows as on other operating systems: the upload buffer (SO_SNDBUF) is a fixed size and simply too small to perform well in a lot of cases.

Applications can still enlarge the buffer, if they're aware of this bottleneck, and get better performance without having to change libcurl, but I doubt a lot of them do. And really, libcurl should perform as well as it possibly can just by itself without any necessary tuning by the application authors.
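
For illustration, enlarging the buffer from an application boils down to setting a single socket option. Here is a minimal sketch of my own in Rust using the libc crate on a POSIX system (not the actual libcurl patch; Winsock exposes the same SO_SNDBUF option on Windows):

use std::net::TcpStream;
use std::os::unix::io::AsRawFd;

// Ask the kernel for a larger send buffer so more un-acked data can be in flight.
fn enlarge_send_buffer(stream: &TcpStream, bytes: libc::c_int) -> std::io::Result<()> {
    let ret = unsafe {
        libc::setsockopt(
            stream.as_raw_fd(),
            libc::SOL_SOCKET,
            libc::SO_SNDBUF,
            &bytes as *const libc::c_int as *const libc::c_void,
            std::mem::size_of::<libc::c_int>() as libc::socklen_t,
        )
    };
    if ret == 0 { Ok(()) } else { Err(std::io::Error::last_os_error()) }
}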

Users testing this out

Daniel Jelinski brought a fix for this that repeatedly polls Windows during uploads to ask for a suitable send buffer size and then resizes it on the go if it deems a new size is better. In order to figure out whether this patch is indeed a good idea or if there's a downside for some, we went wide and called out for users to help us.

The results were amazing. With speedups up to almost 7 times faster, exactly those newer Windows versions that supposedly have autotuning can obviously benefit substantially from this patch. The median test still saw uploads more than twice as fast with the patch. Pretty amazing really. And beyond weird that this crazy thing should be required to get ordinary sockets to perform properly on an updated operating system in 2018.

Windows XP isn't affected at all by this fix, and we've seen tests running as VirtualBox guests in NAT-mode also not gain anything, but we believe that's VirtualBox's "fault" rather than Windows or the patch.

Landing

The commit is merged into curl's master git branch and will be part of the pending curl 7.61.1 release, which is due to ship on September 5, 2018. I think it can serve as an interesting case study to see how long it takes until Windows 10 users get their versions updated to this.

Table of test runs

The table lists the Windows version, the test times for the runs with the unmodified curl and the patched one, how much time the second run needed as a percentage of the first, a column with comments, and last a column showing the speedup multiple for that test.

Thank you everyone who helped us out by running these tests!

Version  Time vanilla  Time patched  Patched/vanilla  Comment  Speedup
6.0.6002 15.234 2.234 14.66% Vista SP2 6.82
6.1.7601 8.175 2.106 25.76% Windows 7 SP1 Enterprise 3.88
6.1.7601 10.109 2.621 25.93% Windows 7 Professional SP1 3.86
6.1.7601 8.125 2.203 27.11% 2008 R2 SP1 3.69
6.1.7601 8.562 2.375 27.74% 3.61
6.1.7601 9.657 2.684 27.79% 3.60
6.1.7601 11.263 3.432 30.47% Windows 2008R2 3.28
6.1.7601 5.288 1.654 31.28% 3.20
10.0.16299.309 4.281 1.484 34.66% Windows 10, 1709 2.88
10.0.17134.165 4.469 1.64 36.70% 2.73
10.0.16299.547 4.844 1.797 37.10% 2.70
10.0.14393 4.281 1.594 37.23% Windows 10, 1607 2.69
10.0.17134.165 4.547 1.703 37.45% 2.67
10.0.17134.165 4.875 1.891 38.79% 2.58
10.0.15063 4.578 1.907 41.66% 2.40
6.3.9600 4.718 2.031 43.05% Windows 8 (original) 2.32
10.0.17134.191 3.735 1.625 43.51% 2.30
10.0.17713.1002 6.062 2.656 43.81% 2.28
6.3.9600 2.921 1.297 44.40% Windows 2012R2 2.25
10.0.17134.112 5.125 2.282 44.53% 2.25
10.0.17134.191 5.593 2.719 48.61% 2.06
10.0.17134.165 5.734 2.797 48.78% run 1 2.05
10.0.14393 3.422 1.844 53.89% 1.86
10.0.17134.165 4.156 2.469 59.41% had to use the HTTPS endpoint 1.68
6.1.7601 7.082 4.945 69.82% over proxy 1.43
10.0.17134.165 5.765 4.25 73.72% run 2 1.36
5.1.2600 10.671 10.157 95.18% Windows XP Professional SP3 1.05
10.0.16299.547 1.469 1.422 96.80% in a VM running on Linux 1.03
5.1.2600 11.297 11.046 97.78% XP 1.02
6.3.9600 5.312 5.219 98.25% 1.02
5.2.3790 5.031 5 99.38% Windows 2003 1.01
5.1.2600 7.703 7.656 99.39% XP SP3 1.01
10.0.17134.191 1.219 1.531 125.59% FTP 0.80
TOTAL 205.303 102.271 49.81% 2.01
MEDIAN 43.51% 2.30

The Rust Programming Language BlogLaunching the 2018 State of Rust Survey

It’s that time again! Time for us to take a look at how the Rust project is doing, and what we should plan for the future. The Rust Community Team is pleased to announce our 2018 State of Rust Survey! Whether or not you use Rust today, we want to know your opinions. Your responses will help the project understand its strengths and weaknesses and establish development priorities for the future.

Completing this survey should take about 10 to 15 minutes and is anonymous unless you choose to give us your contact information. We will be accepting submissions until September 8th, and we will write up our findings a month or so afterwards on blog.rust-lang.org. You can see last year's results here.

This year, volunteers have also translated the survey into many languages! You can now take the survey in:

(If you speak multiple languages, please pick one)

Please help us spread the word by sharing the survey link on your social network feeds, at meetups, around your office, and in other communities.

If you have any questions, please see our frequently asked questions or email the Rust Community team at community-team@rust-lang.org.

Finally, we wanted to thank everyone who helped develop, polish, and test the survey!

Mitchell BakerIn Memoriam: Gervase Markham

Gerv was Mozilla’s first intern.  He arrived in the summer of 2001, when Mozilla staff was still AOL employees.  It was a shock that AOL had allocated an intern to the then-tiny Mozilla team, and we knew instantly that our amazingly effective volunteer in the UK would be our choice.

When Gerv arrived a few things about him jumped out immediately.  The first was a swollen, shiny, bright pink scar on the side of his neck.  He quickly volunteered that the scar was from a set of surgeries for his recently discovered cancer.  At the time Gerv was 20 or so, and had less than a 50% chance of reaching 35.  He was remarkably upbeat.

The second thing that immediately became clear was Gerv’s faith, which was the bedrock of his response to his cancer.  As a result the scar was a visual marker that led straight to a discussion of faith. This was the organizing principle of Gerv’s life, and nearly everything he did followed from his interpretation of how he should express his faith.

Eventually Gerv felt called to live his faith by publicly judging others in politely stated but damning terms.  His contributions to expanding the Mozilla community would eventually become shadowed by behaviors that made it more difficult for people to participate.  But in 2001 all of this was far in the future.

Gerv was a wildly active and effective contributor almost from the moment he chose Mozilla as his university-era open source project.  He started as a volunteer in January 2000, doing QA for early Gecko builds in return for plushies, including an early program called the Gecko BugAThon.  (With gratitude to the Internet Archive for its work archiving digital history and making it publicly available.)

Gerv had many roles over the years, from volunteer to mostly-volunteer to part-time, to full-time, and back again.  When he went back to student life to attend Bible College, he worked a few hours a week, and many more during breaks.  In 2009 or so, he became a full time employee and remained one until early 2018 when it became clear his cancer was entering a new and final stage.

Gerv’s work varied over the years.  After his start in QA, Gerv did trademark work, a ton of FLOSS licensing work, supported Thunderbird, supported Bugzilla, Certificate Authority work, policy work and set up the MOSS grant program, to name a few areas. Gerv had a remarkable ability to get things done.  In the early years, Gerv was also an active ambassador for Mozilla, and many Mozillians found their way into the project during this period because of Gerv.

Gerv’s work life was interspersed with a series of surgeries and radiation as new tumors appeared. Gerv would methodically inform everyone he would be away for a few weeks, and we would know he had some sort of major treatment coming up.

Gerv’s default approach was to see things in binary terms — yes or no, black or white, on or off, one or zero.  Over the years I worked with him to moderate this trait so that he could better appreciate nuance and the many “gray” areas on complex topics.  Gerv challenged me, infuriated me, impressed me, enraged me, surprised me.  He developed a greater ability to work with ambiguity, which impressed me.

Gerv’s faith did not have ambiguity at least none that I ever saw.  Gerv was crisp.  He had very precise views about marriage, sex, gender and related topics.  He was adamant that his interpretation was correct, and that his interpretation should be encoded into law.  These views made their way into the Mozilla environment.  They have been traumatic and damaging, both to individuals and to Mozilla overall.

The last time I saw Gerv was at FOSDEM, Feb 3 and 4.   I had seen Gerv only a few months before in December and I was shocked at the change in those few months.  Gerv must have been feeling quite poorly, since his announcement about preparing for the end was made on Feb 16.  In many ways, FOSDEM is a fitting final event for Gerv — free software, in the heart of Europe, where impassioned volunteer communities build FLOSS projects together.

To memorialize Gerv’s passing, it is fitting that we remember all of Gerv —  the full person, good and bad, the damage and trauma he caused, as well as his many positive contributions.   Any other view is sentimental.  We should be clear-eyed, acknowledge the problems, and appreciate the positive contributions.  Gerv came to Mozilla long before we were successful or had much to offer besides our goals and our open source foundations.  As Gerv put it, he’s gone home now, leaving untold memories around the FLOSS world.

The Mozilla BlogFirefox Offers Recommendations with Latest Test Pilot Experiment: Advance

The internet today is often like being on a guided tour bus in an unfamiliar city. You end up getting off at the same places that everyone else does. While it’s convenient and doesn’t require a lot of planning, sometimes you want to get a little off the beaten path.

With the latest Firefox experiment, Advance, you can explore more of the web efficiently, with real-time recommendations based on your current page and your most recent web history.

With Advance we're taking you back to our Firefox roots and the experience that started everyone surfing the web: that time when the World Wide Web was uncharted territory and we could freely discover new topics and ideas online. The Internet was a different place.

We wondered, is it possible to recapture that serendipitous moment of discovery that opens people's eyes to greater awareness of the topic they were seeking? We explored the idea of a ‘forward button’ to improve the way content is discovered, and launched our Context Graph initiative. It resulted in our first Context Graph feature, Activity Stream, which was initially tested in Test Pilot and shipped in November with our new Firefox Quantum browser. With today's Advance experiment, we hope to bring the concept of the recommender system more to life, at a point where people no longer need to go backwards in search to move forward to discover new, relevant content.

Here’s how it works:

    • Advance is a Web Extension that works by analyzing content you’re into right now in order to provide recommendations based on what you may want to “Read Next” through a sidebar in the browser.
    • Additionally, Advance shares recommendations based on your recent online history which is discoverable in the ‘For You’ section of the sidebar. The recommendations will be based on what you visited once you’ve installed the Web Extension.
    • The Advance sidebar enables discovery without disrupting workflow.
    • Recommendations are purely driven by relevance, the primary goal of this experience is to give you the best and most timely recommendations.

Advance Recommendations listed on the left side

For example, you're just browsing the internet and come across a page with a list of the hottest restaurants. Advance starts to recommend similar content around the most popular restaurants so that you can start comparing without having to do all the research on your own. These recommendations are based on the trusted sites you've already visited, and new sites are suggested for you to explore. If there's a recommendation you don't want, you have the option to flag it as “Not interesting, off topic/spam, block sites,” or give direct feedback. The recommendations are personalized for you.

Not a foodie? If you're a sports fan, an opera fan, or into the news, Advance makes current and relevant recommendations so that it's easy to hop off the bus to explore. Just browse the web normally and keep the sidebar open when you're feeling adventurous.

The Advance experiment is available for download on Test Pilot and powered by Laserlike, a machine learning startup that has built a web scale Content Search, Discovery and Personalization platform. It enables content discovery based on web browsing activity and getting diverse perspectives on any topic. At Mozilla, we believe browser history is sensitive information and we want people to clearly understand that Laserlike will receive their web browsing history before installing the experiment. We have also included controls so that participants can pause the experiment, see what browser history Laserlike has about them, or request deletion of that information. We're interested in seeing how our users respond to their browsers having a more active role in helping them explore the web, and we'll experiment with different methods of providing these recommendations if we see enough interest.

Join our Test Pilot Program

The Test Pilot program is open to all Firefox users. Your feedback helps us test and evaluate a variety of potential Firefox features. We have more than 100,000 daily participants, and if you're interested in helping us build the future of Firefox, visit testpilot.firefox.com to learn more.

If you're familiar with Test Pilot then you know all our projects are experimental, so we've made it easy to give us feedback or disable features at any time from testpilot.firefox.com.

Check out the new Advance extension. We'd love to hear your feedback!

The post Firefox Offers Recommendations with Latest Test Pilot Experiment: Advance appeared first on The Mozilla Blog.

Firefox Test PilotAdvancing the Web

The web runs on algorithms. Your search results, product recommendations, and the news you read are all customized to your interests. They are designed to increase the time you spend in front of a screen, build addiction to sites and services, and ultimately maximize the number of times you click on advertisements.

Without discounting the utility that this personalization can provide, it’s important to consider the cost: detailed portfolios of data about you are sitting on a server somewhere, waiting to be used to determine the optimum order of your social media feeds. Even if you trust that the parties collecting that data will use it responsibly, it has to live somewhere and has to be transmitted there, which makes it a juicy target for bad actors who may not act so responsibly.

At Mozilla we think the web deserves better, and we believe that we are uniquely positioned to offer you the best of both worlds:

Browsers could do so much more, through a better understanding of your behavior and by using the experience of people at human-scale to give you content that enriches your life, regardless of whom you know or where you live.

A number of ongoing Firefox projects are attempting to provide these benefits with Mozilla's sensibilities:

Today, I’m pleased to announce the next of these efforts: Advance, now available on Test Pilot.

Introducing Advance

Advance offers you a new type of forward button, making real-time content recommendations from elsewhere on the web.

<figcaption>Advance offers contextual recommendations based on the pages you are currently visiting.</figcaption>

Over time, these recommendations will become personalized to your interests, learning from your interactions with the experiment and the broader internet. In addition to recipes, try it out with book reviews, blog posts, and news stories. We think Advance will help you find your next favorite thing.

<figcaption>Over time, recommendations become personalized to your interests.</figcaption>

We’re launching Advance in collaboration with Laserlike, a machine learning startup that has built a web scale content search, discovery and personalization platform. They’re our trusted partners on this project, and we’re so grateful for their help in advancing our mission.

We’re trying to prove that we can use these technologies in the right way, and refuse to sacrifice user control to do so. The experiment is opt-in, and at any time you may pause its data collection. You are able to view the data that Advance has collected about you, and may request the deletion of that data at any time.

Advance is available today from Firefox Test Pilot. Try it out, and tell us what you think. You’re helping to shape the future of Firefox.


Advancing the Web was originally published in Firefox Test Pilot on Medium, where people are continuing the conversation by highlighting and responding to this story.

QMOFirefox 62 Beta 14 Testday Results

Hello Mozillians!

As you may already know, last Friday, August 3rd, we held a new Testday event for Firefox 62 Beta 14.

Thank you all for helping us make Mozilla a better place: Gabriela Montagu.

From India team: Surentharan.R.A, Amirthavenkataramani, showkath begum and R.Monisha.

From Bangladesh team: Nazir Ahmed Sabbir, Kazi Ashraf Hossain and Maruf Rahman.

Results:

– several test cases executed for Pocket, Customization and Bookmarks.

– 1 bug verified: 1441465.

Thanks for another successful testday! 🙂

This Week In RustThis Week in Rust 246

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

News & Blog Posts

Crate of the Week

This week's crate is warp, a fast, composable web framework. Thanks to Willi Kappler for the suggestion!

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

165 pull requests were merged in the last week

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

No RFCs were approved this week.

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

RFCs

No RFCs are currently in final comment period.

Tracking Issues & PRs

New RFCs

Upcoming Events

Online
Europe
North America

If you are running a Rust event please add it to the calendar to get it mentioned here. Email the Rust Community Team for access.

Rust Jobs

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

We put in a lot of work to make upgrades painless; for example, we run a tool (called “crater”) before each Rust release that downloads every package on crates.io and attempts to build their code and run their tests.

Rust Blog: What is Rust 2018.

Thanks to azriel91 for the suggestion!

Please submit your quotes for next week!

This Week in Rust is edited by: nasa42 and llogiq.

Julien VehentRunning with Runscribe Plus


Six months ago, I broke the bank and bought a pair of Runscribe Plus running pods. I was recovering from a nasty sprain of the deltoid ligaments acquired a year ago while running a forest trail, and was looking for ways to reduce the risk of further damage during my runs.

Also, I love gadgets, so that made for a nice excuse! :)

After reading various articles about the value of increasing step rates to decrease the risk of injury, I looked into various footpod options, the leading one being the Stryd, but I wanted to monitor both feet, which only the Runscribe Plus can do. So I did something I almost never do: order a gadget that hasn't been heavily reviewed by many others (with the exception of the5krunner).

Runscribe Plus, which I'll abbreviate RS, is a sensor that monitors your feet movement and impact while you run. It measures:

  • pronation: rolling of the foot, particularly useful to prevent sprains
  • footstrike: where you foot hits the ground, heel, middle or front
  • shock: how much force your feet hit the ground with
  • step rate
  • stride length
  • contact time
  • and a bunch of calculation based on these raw metrics
My RS arrived less than a week after ordering them, but I couldn't use them right away. After several hours of investigation and back and forth with the founder, Tim Clark, by email, we figured out that my pods shipped with a bogus firmware. He remotely pushed a new version, which I updated to using the Android app, and the pods started working as intended.

Usability is great. RS starts recording automatically when the step rate goes above 140 (I usually run around 165spm), and also stops automatically at the end of a run. The Android app then downloads running data from each pod and uploads it to the online dashboard. Both the app and the webui can be used to go through the data, and while the app is fine for visualizing data, I do find the webui to be a lot more powerful and convenient to use.

The cool thing about RS is being able to compare left and right foot, because each foot is measured separately. This is useful to detect and correct balance issues. In my case, I noticed after a few runs that my injured foot, the left one, was a lot less powerful than the right one. It was still painful, I simply couldn't push on it as much, and the right foot was compensating and taking a lot more shock. I tried to make a conscious effort to reduce this imbalance over the following month, and it seems to have paid off in the end.

The RunScribe Dashboard displays shock data for each foot recorded during a 5k. The dark red line represents the right foot and is taking a lot more shock than the light red one representing the left foot.

It's possible to use the RS to measure distance, but a lot of users on the forum have been complaining about distance accuracy issues. I've run into some of those, even after calibrating the pods to my stride length over a dozen runs. I would go for a 5-mile run with my GPS watch and RS would measure a distance of anything between 4 and 6 miles. RS doesn't have a GPS, so it bases those calculations on your stride length and step count. Those inaccuracies didn't really bother me, because you can always update the distance in the app or webui after the fact, which also helps train the pod, and I am more interested in other metrics anyway.

That being said, the distance inaccuracy is now completely gone. According to Garmin, this morning's run was 8.6 miles, which RS recorded as 8.5 miles. That's a 1% margin of error, and I honestly can't tell which one is right between RS and Garmin.

So what changed? I was previously placing the pods on the heels of my shoes but recently moved them to the laces, which may have helped. I also joined the beta program to get early firmware updates, and I think Tim has been tweaking distance calculation quite a bit lately. At any rate, this is now working great.
RS can also broadcast live metrics to your running watch, which can then be displayed on their own screen. I don't find those to be very useful, so I don't make use of it, but it does provide real-time shock and step rate and what not.

What about Power?

I'll be honest, I have no idea. Running power doesn't seem extremely useful to me, or maybe I need to spend more time studying its value. RS does expose a Power value, so if this is your thing, you may find it interesting.

Take Away

RS is definitely not for everyone. It has its rough edges and exposes data you need to spend time researching to understand and make good use of. That said, whether you're a professional athlete or, like me, just a geek who likes gadgets and data, it's a fantastic little tool to measure your progress and tweak your effort in areas you wouldn't be able to identify on your own. I like it a lot, and I think more people will adopt this type of tool in the future.

Did it help with my ankle recovery? I think so. Tracking pronation and shock metrics was useful to make sure I wasn't putting myself at risk again. The imbalance data is probably the most useful information I got out of the RS that I couldn't get before, and definitely justifies going with a system with two pods instead of one. And, if nothing else, it helped me regain confidence in my ability to do long runs without hurting myself.

Footstrike metrics for left and right foot recorded during a half marathon shows I mostly run on the middle and back of my feet. How to use that data is left as an exercise to the runner.

Last, but most certainly not least, Tim Clark and the Runscribe team are awesome. Even with the resources of a big shop like Garmin, it's not easy to take an experimental product through rounds of testing while maintaining a level of quality that satisfies runners accustomed to expensive running gear ($700 watches, $200 shoes, etc.). For a small team to pull this off is a real accomplishment, all while being accessible and creating a friendly community of passionate runners. It's nice to support the underdog every once in a while, even when that means having to live with minor bugs and being patient between updates.
Note: this blog post was not sponsored by Runscribe in any way. I paid for my own pods and have no affiliation with them, other than being a happy customer.

Mozilla ThunderbirdWhat’s New in Thunderbird 60

Thunderbird 60, the newest stable release of everyone’s favorite desktop Email client, has been released. This version of Thunderbird is packed full of great new features, fixes, and changes that improve the user experience and make for a worthwhile upgrade. I’ll highlight three of the biggest changes in Thunderbird 60 in this post. For more information on the release check out the list over on the support website and the full release notes over on our website.

Thunderbird’s Photon Look

Like Firefox, Thunderbird now has a new “Photon” look. Tabs are square, the title bar can be toggled on and off, resulting in some saved pixels so your Email can shine. There are also new light and dark themes that ship with Thunderbird by default. Additionally, there are multiple chat themes now available. WebExtension themes are now enabled in Thunderbird as well.

Thunderbird 57 “Photon” Visual Refresh

Thunderbird 60 “Photon” Visual Refresh

Also, Thunderbird has a new logo accompanying the new release! We’re very pleased with the new branding that mirrors the Quantum-y upgrade of our sister project Firefox. You can see all the branding updates on Identihub. Identihub is run by Ura Design and they have been a great design partner for Thunderbird, spearheading the logo update as well as helping out in various other ways.

New Thunderbird Logo

New Thunderbird Logo

Attachment Management Improvements

Thunderbird 60 features several improvements to attachment handling in the compose window. Attachments can now be reordered using a dialog, keyboard shortcuts, or drag and drop. The “Attach” button has been moved to the right side of the compose window, above the attachment pane. The localized access key of the attachment pane (e.g. Alt+M for English) now also works to show or hide the pane (on Mac, it’s always Ctrl+M). Hiding a non-empty attachment pane will now show a placeholder paperclip to indicate the presence of attachments and avoid sending them accidentally. The attachment pane can also be shown initially when composing a new message: Right-click on the pane header to enable this option.

Attachment Management in Thunderbird 60

Attachment Management in Thunderbird 60

 

Calendar Improvements

In this new version of Thunderbird there are various improvements to the Calendar. For instance, the calendar allows for copying, cutting or deleting a selected occurrence or the entire series for recurring events. Calendar provides an option to display locations for events in calendar day and week views. Calendar now has the ability to send meeting notifications directly instead of showing a popup. When pasting an event or task, calendar lets the user select a target calendar. Finally, email scheduling is now possible when using CalDAV servers supporting server-side scheduling.

Other Changes

Outside of the changes described above there are many other improvements and bug fixes in Thunderbird 60. To get an idea of the full scope you can check out the great list over at the Mozilla Support site or the release notes.

Lastly, you can give Thunderbird 60 a try by downloading it here. If you want to support the development of Thunderbird, please consider making a donation.

Paul BoneDisassembling JITed code in GDB

I've been making changes to the JIT in SpiderMonkey, and sometimes I get a SEGFAULT. Okay, so open it in gdb, and then this happens:

Thread 1 "js" received signal SIGSEGV, Segmentation fault.
0x0000129af35af5e9 in ?? ()

Not helpful, maybe there’s something in the stack?

(gdb) backtrace
#0  0x0000129af35af5e9 in  ()
#1  0x0000129af35b107d in  ()
#2  0xfff9800000000000 in  ()
#3  0xfff8800000000002 in  ()
#4  0xfff8800000000002 in  ()

Still not helpful, I’m reasonably confident the crash is in JITed code which has no debugging symbols or other info. So I don’t know what it’s actually executing when it crashed.

In case it’s not apparent, this is a short blog post where I can make notes of one way to get some more information when debugging JITed code.

First of all, those really large addresses (frames 2, 3 and 4) look suspicious. I’m not sure what causes that.

Now, I know the change I made to the JIT, so it’s likely that that’s the code that’s crashing, I just don’t know why. It would help to see what code is being executed:

(gdb) disassemble
No function contains program counter for selected frame.

What it’s trying to say, is that the current program counter at this level in the backtrace does not correspond with the C program (SpiderMonkey). Yes, unless we did a call or goto of something invalid, then we’re probably executing JITed code.

Let’s get more info:

(gdb) info registers
rax            0x7ffff54b30c0   140737308733632
rbx            0xe4e4e4e400000891       -1953184670468274031
rcx            0xc      12
rdx            0x7ffff54c1058   140737308790872
rsi            0xa      10
rdi            0x7ffff54c1040   140737308790848
rbp            0x7fffffff9438   0x7fffffff9438
rsp            0x7fffffff9418   0x7fffffff9418
r8             0x7fffffff9088   140737488326792
r9             0x8      8
r10            0x7fffffff9068   140737488326760
r11            0x7ffff5d2f128   140737317630248
r12            0x0      0
r13            0x0      0
r14            0x7ffff54a0040   140737308655680
r15            0x0      0
rip            0x129af35af5e9   0x129af35af5e9
eflags         0x10202  [ IF RF ]
cs             0x33     51
ss             0x2b     43
ds             0x0      0
es             0x0      0
fs             0x0      0
gs             0x0      0

These are the values in the CPU registers. The debugger uses the rip (program counter), rsp (stack pointer) and rbp (frame pointer) registers to know what it's executing and to read the stack, including the calls that led to this one. We can use this too; we're going to use rip to figure out what's being executed, and its current value is 0x129af35af5e9.

(gdb) dump memory code.raw 0x129af35af5e9 0x129af35af600

Then in a shell:

$ hexdump -C code.raw
00000000  83 03 01 c7 02 4b 00 00  00 e9 82 00 00 00 49 bb
|.....K........I.|
00000010  a8 ab d1 f5 ff 7f 00                              |.......|

I have asked gdb to write the contents of memory at the instruction pointer to a file named code.raw. Note that on x86-64 you need to write at least 15 bytes, as some instructions can be that long; I have 23 bytes.

I’d normally disassemble code using the objdump program:

$ objdump -d code.raw
objdump: code.raw: File format not recognised

In this case it needs extra clues about the raw data in this file. We tell it the file format, the machine "i386" and give the disassembler more information about the machine "x86-64".

$ objdump -b binary -m i386 -M x86-64 -D code.raw

code.raw:     file format binary


Disassembly of section .data:

00000000 <.data>:
   0:   83 03 01                addl   $0x1,(%rbx)
   3:   c7 02 4b 00 00 00       movl   $0x4b,(%rdx)
   9:   e9 82 00 00 00          jmpq   0x90
   e:   49                      rex.WB
   f:   bb a8 ab d1 f5          mov    $0xf5d1aba8,%ebx
  14:   ff                      (bad)
  15:   7f 00                   jg     0x17

Yay. I can see the instruction it crashed on. Adding the number 1 to the 32-bit value stored at the address pointed to by rbx. I’d like some more context, so I have to get the instructions that lead to this. Note that after the jmpq instruction nothing makes sense, that’s okay since that jump is always taken.

(gdb) dump memory code.raw 0x2ce07c3895e6 0x2ce07c3895f7
...
$ objdump -b binary -m i386 -M x86-64 -D code.raw

code.raw:     file format binary


Disassembly of section .data:

00000000 <.data>:
   0:   49 8b 1b                mov    (%r11),%rbx
   3:   83 03 01                addl   $0x1,(%rbx)
   6:   c7 02 4b 00 00 00       movl   $0x4b,(%rdx)
   c:   e9 82 00 00 00          jmpq   0x93

When I go back three bytes I get lucky and find another valid instruction that also makes sense.

(gdb) dump memory code.raw 0x2ce07c3895e5 0x2ce07c3895f7
...
$ objdump -b binary -m i386 -M x86-64 -D code.raw

code.raw:     file format binary


Disassembly of section .data:

00000000 <.data>:
   0:   00 49 8b                add    %cl,-0x75(%rcx)
   3:   1b 83 03 01 c7 02       sbb    0x2c70103(%rbx),%eax
   9:   4b 00 00                rex.WXB add %al,(%r8)
   c:   00 e9                   add    %ch,%cl
   e:   82                      (bad)
   f:   00 00                   add    %al,(%rax)
        ...

Gibberish. Unfortunately I just have to guess which byte an instruction might begin on, or go back byte-by-byte finding instructions that make sense. There was quite a bit of experimentation, and a lot more gibberish, until I found:

(gdb) dump memory code.raw 0x2ce07c3895dd 0x2ce07c3895f7
...
$ objdump -b binary -m i386 -M x86-64 -D code.raw

code.raw:     file format binary


Disassembly of section .data:

00000000 <.data>:
   0:   bb 28 f1 d2 f5          mov    $0xf5d2f128,%ebx
   5:   ff                      (bad)
   6:   7f 00                   jg     0x8
   8:   00 49 8b                add    %cl,-0x75(%rcx)
   b:   1b 83 03 01 c7 02       sbb    0x2c70103(%rbx),%eax
  11:   4b 00 00                rex.WXB add %al,(%r8)
  14:   00 e9                   add    %ch,%cl
  16:   82                      (bad)
  17:   00 00                   add    %al,(%rax)
        ...

This is almost correct (except for all the gibberish). But at least it starts on an instruction that kind-of makes sense with a valid-looking memory address. But wait, that instruction uses ebx, a 32-bit register, which is not what I'm expecting since the code I'm JITing works with 64-bit memory addresses. And all that gibberish could be part of a memory address, it has bytes like 0xff and 0x7f in it!

I go back one more byte:

(gdb) dump memory code.raw 0x2ce07c3895dc 0x2ce07c3895f7
...
$ objdump -b binary -m i386 -M x86-64 -D code.raw

code.raw:     file format binary


Disassembly of section .data:

00000000 <.data>:
   0:   49 bb 28 f1 d2 f5 ff    movabs $0x7ffff5d2f128,%r11
   7:   7f 00 00
   a:   49 8b 1b                mov    (%r11),%rbx
   d:   83 03 01                addl   $0x1,(%rbx)
  10:   c7 02 4b 00 00 00       movl   $0x4b,(%rdx)
  16:   e9 82 00 00 00          jmpq   0x9d

Got it. That's a long instruction (which I'll talk more about in my next article), now that we have the extra byte at the beginning. x86 has prefix bytes for some instructions which can override some things about the instruction. In this case 0x49 is saying this instruction operates on 64-bit data (well, 0x48 says that, and the +1 is part of the register address).
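
For reference, that prefix byte decodes as follows under the standard x86-64 REX encoding rules (a quick breakdown of the byte above, not output from gdb or objdump):

0x49 = 0100 1001
       0100        fixed REX marker
       W = 1       64-bit operand size, so the immediate is 8 bytes (hence movabs)
       R = 0, X = 0
       B = 1       extends the register encoded in the opcode, turning bb (ebx) into r11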

And there’s the bug (3rd line). I’m dereferencing this address, the one that I load into r11 once, and then again during the addl. I should only de-reference it once. The cause was that I misunderstood SpiderMonkey’s macro assembler’s mnemonics.

Update 2018-08-07

One response to this pointed out that I could have just used:

(gdb) disassemble 0x12345, +0x100

To disassemble a range of memory, and wouldn’t have had the "No function contains program counter for selected frame." error. They even suggested I could use something like:

(gdb) disassemble $rip-50, +0x100

I'll definitely try these next time; they might not be the exact syntax, as I haven't tested them.

Update 2018-08-18

Another tip is to use: x/20i $pc

That’s the whole command. x means that GDB should use the $pc as a memory location and not as a literal; /20i means "treat that memory location as containing instructions and show 20 of them"

You can also use this with display, like in display x/4i $pc so that every time you stepi, it will auto-print the next 4 instructions.

Chris IliasHow to add the share menu to the Firefox address bar

While working on my previous blog post, I came across another great feature you may not know about. Let’s say you use the Share menu, but opening the Page Actions menu requires too much navigation. You need quicker access!

To add an item to the address bar, right-click on it and select Add to Address Bar.
To remove it, right-click on the item and select Remove from Address Bar.

Mozilla Addons BlogNew backend for storage.local API

To help improve Firefox performance, the backend for the storage.local API is migrating from JSON to IndexedDB. These changes will soon be enabled on Firefox Nightly and will stabilize when Firefox 63 lands in the Beta channel. If your users switch between Firefox channels using the same profile during this time, they may experience data regression in the extensions they have previously installed.

We recommend that users do not change Firefox channels between now and September 5, 2018. However, if they do and they contact you with questions about why their extensions are not behaving normally (such as losing saved options or other local data), please point them to this post for instructions on how to retrieve and re-import their extension data.

How to retrieve migrated data and re-import the extension data

Go to about:config and check the setting for extensions.webextensions.ExtensionStorageIDB.enabled. If it is set to true, the extension data has been moved to the new backend and is not directly available as a single file in the file system.

If the extension data is not available after it has been moved to the new backend, follow these steps to ask Firefox to re-import it:

  1. Look up the Extension ID by going to about:debugging
  2. Navigate to your Firefox profile directory
  3. Go to the folder called browser-extension-data
  4. Go to the folder of the Extension ID you found in about:debugging
  5. You will see a file named storage.js.migrated (or storage.js.migrated.N if the data has migrated more than once). Your data has been moved into this file.
  6. Uninstall the extension
  7. Copy the file named storage.js.migrated to a new file named storage.js in the same directory
  8. Open the browser console.
    1. You can access the browser console from the [hamburger menu] → Web Developer → Browser Console
  9. Re-install the extension
  10. Wait for the messages “Migrating storage.local data for <Extension debug name>” and “storage.local data successfully migrated to IDB Backend for <Extension debug name>” to appear in the browser console

How to address errors when re-importing migrated extension data

If you see a QuotaExceededError in the browser console during the final step in the data retrieval and re-importing process, you may have insufficient disk space. After you free additional disk space, you may be able to fix this issue by following the steps outlined in the section above.

If the problem persists and the extension is using the new ExtensionStorageIDB backend, please report the issue on Bugzilla. You can see if the extension is using the ExtensionStorageIDB backend by going to about:config and seeing if extensions.webextensions.ExtensionStorageIDB.migrated.EXTENSION_ID is set to true.

Reporting issues with the storage.local API

If you are an extension developer and you encounter any issues that seem to be related to the storage.local API, please file a new issue on Bugzilla and add it as a blocker of bug 1474562 so that we can promptly investigate it.

The post New backend for storage.local API appeared first on Mozilla Add-ons Blog.

Mozilla B-TeamHappy BMO Push Day!

David LawrenceHappy BMO Push Day!

https://github.com/mozilla-bteam/bmo/tree/release-20180803.1

the following changes have been pushed to bugzilla.mozilla.org:

  • [1480583] User->match is not paging properly so all user results are not return if more than 100 users

discuss these changes on mozilla.tools.bmo.

Mozilla VR BlogThis Week in Mixed Reality: Issue 14


It's been another busy week in MR land for the team. We are getting really close to releasing some fun new features.

Browsers

We spent this week fixing more bugs and improving performance in Firefox Reality:

  • Completed user studies, resulting in many great recommendations for these products to address a wider base of users.
  • Nightly version number is now visible to end users in the "About Firefox Reality" button, and is automatically submitted as part of a bug report when the user selects "Report an issue" from within FxR
  • Focusing on bug fixes, primarily around performance and immersive mode
  • Refactor strings.xml to provide better support for l10n
  • More UI tweaks to FxR Focus Mode

Check out this video Josh did demoing Firefox Reality and showing how to install it yourself.

Social

Lots of bug fixes to Hubs and prepping to launch some new features for it.

  • Pen tool tested at meetup, working on adding drawing expirations, rate limiting, and bugfixes.
  • Design pass in prep for user studies, focus on invitations and cleaner/brighter UX for room creation and entry. Visual design for scene landing pages.

Interested in joining our public Friday stand ups? For more details, join our public WebVR Slack #social channel

Content Ecosystem

  • A-Frame support for the Oculus Go controller hasn't shipped yet, but you can test it out with these instructions.

Stick around next week for more new features and improvements!

Hacks.Mozilla.OrgThings Gateway 0.5 packed full of new features, including experimental smart assistant

The Things Gateway from Mozilla lets you directly monitor and control your home over the web, without a middleman.

Today the Mozilla IoT team is excited to announce the 0.5 release of the Things Gateway, which is packed full of new features including customisable devices, a more powerful rules engine, an interactive floorplan and an experimental smart assistant you can talk to.

Customisable Things

Custom Capabilities

A powerful new “capabilities” system means that devices are no longer restricted to a predefined set of Web Thing Types, but can be assembled from an extensible schema-based system of “capabilities” through our new schema repository.

This means that developers have much more flexibility to create weird and wacky devices, and users have more control over how the device is used. So if you have a door sensor which also happens to be a temperature sensor, a smart plug which also has a multi-colour LED ring, or a whole bunch of sensors all in one device, you’re not limited by restrictive device types.

This also provides more flexibility to developers who want to build their own web things using the Things Framework, which now also has support for Rust, MicroPython and Arduino.

Custom Icons

When a user adds a device to the gateway they can now choose what main function they want to use it for and what icon is used to represent it.

Image showing the UI for choosing capabilities from a dropdown menu

You can even upload your own custom icon if you want to.

Image showing UI for selecting an image icon for different types of things

Custom Web Interface

In addition to the built-in UI the gateway generates for devices, web things can now provide a link to a custom web interface designed specifically for any given device. This is useful for complex or unusual devices like a robot or a “pixel wall” where a custom designed UI can be much more user friendly.

Image of UI showing examples of custom web interface icons you can create

Actions & Events

In addition to properties (like “on/off”, “level” and “color”), the gateway UI can now represent actions like “fade” which are triggered with a button and can accept input via a form.

Screenshot of UI for choosing different types of actions

Screenshot of UI for defining duration and level

The UI can also display an event log for a device.

Screenshot of event log UI

Powerful Rules Engine

The rules engine now supports rules with multiple inputs and multiple outputs. Simple rules are still just as easy to create, but more advanced rules can make use of “if”, “while”, “and”, “or” and “equals” operators to create more sophisticated automations through an intuitive drag and drop interface.

It’s also now possible to set colours and strings as outputs.

Interactive Floorplan

The floorplan view is even more useful now that you can view the status of devices and even control them from directly inside the floorplan. Simply tap things to turn them on and off, or long press to get to their detail view. This provides a helpful visual overview of the status of your whole smart home.

UI showing an interactive floorplan for monitoring your smart home

Smart Assistant Experiment

A feature we’re particularly excited about is a new smart assistant you can talk to via a chat style interface, either by typing or using your voice.

You can give it commands like “Turn the kitchen light on” and it will respond to you to confirm the action. So far it can understand a basic set of commands to turn devices on and off, set levels, set colours and set colour temperatures.

Image of the foxy smart assistant and examples of voice and text interactions

The smart assistant is still very experimental so it’s currently turned off by default, but you can enable it through Settings -> Smart Assistant UI.

UI for enabling the smart assistant

Other Changes

Other new features include developer settings which allow you to view system logs and enable/disable the gateway’s SSH server so you can log in via the command line.

UI showing developer settings panel

It’s also now much easier to rename devices, and you can now add devices that require a PIN to be entered during pairing.

How to Get Involved

To try out the latest version of the gateway, download the software image from our website to use on a Raspberry Pi. If you already have a gateway set up, you should notice it automatically update itself to the 0.5 release.

As always, we welcome your contributions to our open source project. You can provide feedback and ask questions on Discourse and file bugs and send pull requests on GitHub.

Happy hacking!

Mozilla B-Teamhappy bmo push day!

happy bmo push day! This release is rather bigger than typical.

release tag

the following changes have been pushed to bugzilla.mozilla.org:

  • [1476288] Replace moz_nick with (new, revised) nick and also attempt to disallow duplicate nicks
  • [1472954] Implement one-click component watching on bug modal and component description pages
  • [1136271] Make user profile page visible to anyone for easier sharing
  • [1475593] Bugzilla Emails received when patches are attached…


Mozilla Addons BlogAugust’s Featured Extensions

Firefox Logo on blue background

Pick of the Month: Privacy Possum

by cowlicks
Protect yourself against some of the sneakiest trackers out there.

“Perfect complement for your privacy.”

Featured: Textmarker

by underflyingbirches
If you do a lot of text highlighting on webpages, this is a highly customizable tool with loads of fancy features like bookmarking, shortcut commands, save options, and more.

“This is the best text marker add-on under the new Firefox platform! It’s simple but also powerful, very flexible.”

Featured: Worldwide Radio

by Oleksandr
Enjoy live radio from more than 30,000 local stations around the globe.

“Love it! Works as intended and I can listen to my favorite radio station in Australia!”

Featured: Transparent Standalone Images

by Jared W
For a clearer view of digital images, this simple but unique extension renders standalone images on transparent backgrounds.

“Oh my god, thank you. I was getting so tired of the white backgrounds around standalone transparent images. Bless you, works perfectly.”

Featured: ReloadMatic: Automatic Tab Refresh

by pylo
More than just another time-controlled tab reloader, ReloadMatic offers cache control, protection against reloading a page you may be in the midst of interacting with, and other nuanced features.

“I really appreciate the time you’ve spent developing this extension because it has far more functionality than the other reloading extensions I’ve tried since moving to [Firefox] Quantum.”

If you’d like to nominate an extension for featuring, please send it to amo-featured [at] mozilla [dot] org for the board’s consideration. We welcome you to submit your own add-on!

The post August’s Featured Extensions appeared first on Mozilla Add-ons Blog.

The Rust Programming Language BlogAnnouncing Rust 1.28

The Rust team is happy to announce a new version of Rust, 1.28.0. Rust is a systems programming language focused on safety, speed, and concurrency.

If you have a previous version of Rust installed via rustup, getting Rust 1.28.0 is as easy as:

$ rustup update stable

If you don’t have it already, you can get rustup from the appropriate page on our website, and check out the detailed release notes for 1.28.0 on GitHub.

What’s in 1.28.0 stable

Global Allocators

Allocators are the way that programs in Rust obtain memory from the system at runtime. Previously, Rust did not allow changing the way memory is obtained, which prevented some use cases. On some platforms, this meant using jemalloc, on others, the system allocator, but there was no way for users to control this key component. With 1.28.0, the #[global_allocator] attribute is now stable, which allows Rust programs to set their allocator to the system allocator, as well as define new allocators by implementing the GlobalAlloc trait.

The default allocator for Rust programs on some platforms is jemalloc. The standard library now provides a handle to the system allocator, which can be used to switch to the system allocator when desired, by declaring a static and marking it with the #[global_allocator] attribute.

use std::alloc::System;

#[global_allocator]
static GLOBAL: System = System;

fn main() {
    let mut v = Vec::new();
    // This will allocate memory using the system allocator.
    v.push(1);
}

However, sometimes you want to define a custom allocator for a given application domain. This is also relatively easy to do by implementing the GlobalAlloc trait. You can read more about how to do this in the documentation.
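
To make that concrete, here is a minimal sketch (my own illustration, not code from the release notes; the CountingAlloc and ALLOCATED names are made up) of a custom allocator that simply forwards to the system allocator while keeping a running count of allocated bytes:

use std::alloc::{GlobalAlloc, Layout, System};
use std::sync::atomic::{AtomicUsize, Ordering};

// Tracks how many bytes are currently allocated.
static ALLOCATED: AtomicUsize = AtomicUsize::new(0);

struct CountingAlloc;

unsafe impl GlobalAlloc for CountingAlloc {
    unsafe fn alloc(&self, layout: Layout) -> *mut u8 {
        // Forward to the system allocator and record the size on success.
        let ptr = System.alloc(layout);
        if !ptr.is_null() {
            ALLOCATED.fetch_add(layout.size(), Ordering::Relaxed);
        }
        ptr
    }

    unsafe fn dealloc(&self, ptr: *mut u8, layout: Layout) {
        System.dealloc(ptr, layout);
        ALLOCATED.fetch_sub(layout.size(), Ordering::Relaxed);
    }
}

#[global_allocator]
static GLOBAL: CountingAlloc = CountingAlloc;

fn main() {
    let v: Vec<u8> = Vec::with_capacity(16);
    println!("currently allocated: {} bytes", ALLOCATED.load(Ordering::Relaxed));
    drop(v);
}

A real allocator would do something more interesting in alloc and dealloc, but the shape of the trait is the same.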

Improved error message for formatting

Work on diagnostics continues, this time with an emphasis on formatting:

format!("{_foo}", _foo = 6usize);

Previously, the error message emitted here was relatively poor:

error: invalid format string: expected `'}'`, found `'_'`
  |
2 |     format!("{_foo}", _foo = 6usize);
  |             ^^^^^^^^

Now, we emit a diagnostic that tells you the specific reason the format string is invalid:

error: invalid format string: invalid argument name `_foo`
  |
2 |     let _ = format!("{_foo}", _foo = 6usize);
  |                       ^^^^ invalid argument name in format string
  |
  = note: argument names cannot start with an underscore

See the detailed release notes for more.

Library stabilizations

We’ve already mentioned the stabilization of the GlobalAlloc trait, but another important stabilization is the NonZero number types. These are wrappers around the standard unsigned integer types: NonZeroU8, NonZeroU16, NonZeroU32, NonZeroU64, NonZeroU128, and NonZeroUsize.

This allows for a size optimization: for example, Option<u8> is two bytes large, but Option<NonZeroU8> is just one byte large. Note that this optimization remains even when the NonZeroU8 is wrapped inside another struct; the example below illustrates that Option<Door> is still only one byte large. The optimization applies to user-defined enums as well, since Option is not special; a second example further down shows that case.

use std::mem;
use std::num::NonZeroU8;

struct Key(NonZeroU8);

struct Door {
    key: Key,
}

fn main() {
    assert_eq!(mem::size_of::<Door>(), 1);
    assert_eq!(mem::size_of::<Option<Door>>(), 1);
}
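
To back up the point about user-defined enums, here is a second small sketch (the Slot type is made up for illustration, not taken from the release notes); the forbidden zero value of NonZeroU8 acts as the niche that encodes the Missing variant, so no separate tag byte is needed:

use std::mem;
use std::num::NonZeroU8;

// NonZeroU8 can never hold 0, so the compiler is free to use the 0 bit
// pattern to represent the Missing variant. The whole enum fits in one byte.
enum Slot {
    Occupied(NonZeroU8),
    Missing,
}

fn main() {
    assert_eq!(mem::size_of::<Slot>(), 1);
}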

A number of other library features have also been stabilized: you can see the detailed release notes for the full list.

Cargo features

Cargo will no longer allow you to publish crates with build scripts that modify the src directory. The src directory in a crate should be considered immutable.

Contributors to 1.28.0

Many people came together to create Rust 1.28. We couldn’t have done it without all of you. Thanks!

Mozilla Security BlogSafe Harbor for Security Bug Bounty Participants

Mozilla established one of the first modern security bug bounty programs back in 2004. Since that time, much of the technology industry has followed our lead and bounty programs have become a critical tool for finding security flaws in the software we all use. But even while these programs have reached broader acceptance, the legal protections afforded to bounty program participants have failed to evolve, putting security researchers at risk and possibly stifling that research.

That is why we are announcing changes to our bounty program policies to better protect security researchers working to improve Firefox and to codify the best practices that we’ve been using.

We often hear of researchers who are concerned that companies or governments may take legal actions against them for their legitimate security research. For example, the Computer Fraud and Abuse Act (CFAA) – essentially the US anti-hacking law that criminalizes unauthorized access to computer systems – could be used to punish bounty participants testing the security of systems and software. Just the potential for legal liability might discourage important security research.

Mozilla has criticized the CFAA for being overly broad and for potentially criminalizing activity intended to improve the security of the web. The policy changes we are making today are intended to create greater clarity for our own bounty program and to remove this legal risk for researchers participating in good faith.

There are two important policy changes we are making. First, we have clarified what is in scope for our bounty program and specifically have called out that bounty participants should not access, modify, delete, or store our users’ data. This is critical because, to protect participants in our bug bounty program, we first have to define the boundaries for bug bounty eligibility.

Second, we are stating explicitly that we will not threaten or bring any legal action against anyone who makes a good faith effort to comply with our bug bounty program. That means we promise not to sue researchers under any law (including the DMCA and CFAA) or under our applicable Terms of Service and Acceptable Use Policy for their research through the bug bounty program, and we consider that security research to be “authorized” under the CFAA.

You can see the full changes we’ve made to our policies in the General Eligibility and Safe Harbor sections of our main bounty page. These changes will help researchers know what to expect from Mozilla and represent an important next step for a program we started more than a decade ago. We want to thank Amit Elazari, who brought this safe harbor issue to our attention and is working to drive change in this space, and Dropbox for the leadership it has shown through recent changes to its vulnerability disclosure policy. We hope that other bounty programs will adopt similar policies.

The post Safe Harbor for Security Bug Bounty Participants appeared first on Mozilla Security Blog.

The Mozilla BlogG20 digital process: Trust requires more transparency and inclusion

We commend the Argentine G20 Presidency for continuing to build momentum around the G20 digital process and look forward to seeing the Declaration and the progress made to that end following the Digital Ministerial on August 24.

However, we can’t ignore the lack of transparency and the step back from multistakeholder engagement that was championed under last year’s G20 Presidency by Germany. Mozilla appreciated the invitation to attend the G20-B20 workshops on July 30, which allowed for providing input into the Digital Declaration. But inviting pre-selected organisations to an unofficial side event on comparatively short notice is not sufficient for a meaningfully transparent and inclusive process.

Assuming responsibility in the digital age also means that governments have to cater to the complexities of existing and upcoming challenges by including different stakeholders for their expertise and various experiences.

We cannot reinstate trust in the development of our digital societies if we close the doors to meaningful engagement and inclusive participatory processes.

Faro Digital, ITS Rio, and Mozilla reiterate as part of a much broader coalition of 80 stakeholders from across the world that a positive, forward-looking digital agenda must support a healthy web ecosystem and put people and their individual rights first, by providing meaningful access, strong privacy and data protection rights, freedom of expression, collaborative cybersecurity, and increased competition.

Read more: https://g20openletter.org

 

Mitchell Baker, Executive Chairwoman, Mozilla

Ronaldo Lemos, Director, ITS Rio

Ezequiel Passeron, Executive Director, Faro Digital

The post G20 digital process: Trust requires more transparency and inclusion appeared first on The Mozilla Blog.

François MarierMercurial commit series in Phabricator using Arcanist

Phabricator supports multi-commit patch series, but it's not yet obvious how to do it using Mercurial. So this is the "hg" equivalent of this blog post for git users.

Note that other people have written tools and plugins to do the same thing and that an official client is coming soon.

Initial setup

I'm going to assume that you've set up arcanist and gotten an account on the Mozilla Phabricator instance. If you haven't, follow this video introduction or the excellent documentation for it (Bryce also wrote additional instructions for Windows users).

Make a list of commits to submit

First of all, use hg histedit to make a list of the commits that are needed:

pick ee4d9e9fcbad 477986 Bug 1461515 - Split tracking annotations from tracki...
pick 5509b5db01a4 477987 Bug 1461515 - Fix and expand tracking annotation tes...
pick e40312debf76 477988 Bug 1461515 - Make TP test fail if it uses the wrong...

Create Phabricator revisions

Now, create a Phabricator revision for each commit (in order, from earliest to latest):

~/devel/mozilla-unified (annotation-list-1461515)$ hg up ee4d9e9fcbad
5 files updated, 0 files merged, 0 files removed, 0 files unresolved
(leaving bookmark annotation-list-1461515)

~/devel/mozilla-unified (ee4d9e9)$ arc diff --no-amend
Linting...
No lint engine configured for this project.
Running unit tests...
No unit test engine is configured for this project.
 SKIP STAGING  Phabricator does not support staging areas for this repository.
Created a new Differential revision:
        Revision URI: https://phabricator.services.mozilla.com/D2484

Included changes:
  M       modules/libpref/init/all.js
  M       netwerk/base/nsChannelClassifier.cpp
  M       netwerk/base/nsChannelClassifier.h
  M       toolkit/components/url-classifier/Classifier.cpp
  M       toolkit/components/url-classifier/SafeBrowsing.jsm
  M       toolkit/components/url-classifier/nsUrlClassifierDBService.cpp
  M       toolkit/components/url-classifier/tests/UrlClassifierTestUtils.jsm
  M       toolkit/components/url-classifier/tests/mochitest/test_trackingprotection_bug1312515.html
  M       xpcom/base/ErrorList.py

~/devel/mozilla-unified (ee4d9e9)$ hg up 5509b5db01a4
3 files updated, 0 files merged, 0 files removed, 0 files unresolved

~/devel/mozilla-unified (5509b5d)$ arc diff --no-amend
Linting...
No lint engine configured for this project.
Running unit tests...
No unit test engine is configured for this project.
 SKIP STAGING  Phabricator does not support staging areas for this repository.
Created a new Differential revision:
        Revision URI: https://phabricator.services.mozilla.com/D2485

Included changes:
  M       toolkit/components/url-classifier/tests/UrlClassifierTestUtils.jsm
  M       toolkit/components/url-classifier/tests/mochitest/test_trackingprotection_bug1312515.html
  M       toolkit/components/url-classifier/tests/mochitest/trackingRequest.html

~/devel/mozilla-unified (5509b5d)$ hg up e40312debf76
2 files updated, 0 files merged, 0 files removed, 0 files unresolved

~/devel/mozilla-unified (e40312d)$ arc diff --no-amend
Linting...
No lint engine configured for this project.
Running unit tests...
No unit test engine is configured for this project.
 SKIP STAGING  Phabricator does not support staging areas for this repository.
Created a new Differential revision:
        Revision URI: https://phabricator.services.mozilla.com/D2486

Included changes:
  M       toolkit/components/url-classifier/tests/mochitest/classifiedAnnotatedPBFrame.html
  M       toolkit/components/url-classifier/tests/mochitest/test_privatebrowsing_trackingprotection.html

Link all revisions together

In order to ensure that these commits depend on one another, click on that last phabricator.services.mozilla.com link, then click "Related Revisions" then "Edit Parent Revisions" in the right-hand side bar and then add the previous commit (D2485 in this example).

Then go to that parent revision and repeat the same steps to set D2484 as its parent.

Amend one of the commits

As it turns out my first patch wasn't perfect and I needed to amend the middle commit to fix some test failures that came up after pushing to Try. I ended up with the following commits (as viewed in hg histedit):

pick ee4d9e9fcbad 477986 Bug 1461515 - Split tracking annotations from tracki...
pick c24f4d9e75b9 477992 Bug 1461515 - Fix and expand tracking annotation tes...
pick 1840f68978a7 477993 Bug 1461515 - Make TP test fail if it uses the wrong...

which highlights that the last two commits changed and that I would have two revisions (D2485 and D2486) to update in Phabricator.

However, since the only reason the third patch has a different commit hash is that its parent changed, there's no need to upload it again to Phabricator. Lando doesn't care about the parent hash and relies instead on the parent revision ID; it essentially applies diffs one at a time.

The trick was to pass the --update DXXXX argument to arc diff:

~/devel/mozilla-unified (annotation-list-1461515)$ hg up c24f4d9e75b9
2 files updated, 0 files merged, 0 files removed, 0 files unresolved
(leaving bookmark annotation-list-1461515)

~/devel/mozilla-unified (c24f4d9)$ arc diff --no-amend --update D2485
Linting...
No lint engine configured for this project.
Running unit tests...
No unit test engine is configured for this project.
 SKIP STAGING  Phabricator does not support staging areas for this repository.
Updated an existing Differential revision:
        Revision URI: https://phabricator.services.mozilla.com/D2485

Included changes:
  M       browser/base/content/test/general/trackingPage.html
  M       netwerk/test/unit/test_trackingProtection_annotateChannels.js
  M       toolkit/components/antitracking/test/browser/browser_imageCache.js
  M       toolkit/components/antitracking/test/browser/browser_subResources.js
  M       toolkit/components/antitracking/test/browser/head.js
  M       toolkit/components/antitracking/test/browser/popup.html
  M       toolkit/components/antitracking/test/browser/tracker.js
  M       toolkit/components/url-classifier/tests/UrlClassifierTestUtils.jsm
  M       toolkit/components/url-classifier/tests/mochitest/test_trackingprotection_bug1312515.html
  M       toolkit/components/url-classifier/tests/mochitest/trackingRequest.html

Note that changing the commit message will not automatically update the revision details in Phabricator. This has to be done manually in the Web UI if required.

Hacks.Mozilla.OrgIntroducing the Dweb


The web is the most successful programming platform in history, resulting in the largest open and accessible collection of human knowledge ever created. So yeah, it’s pretty great. But there is a set of common problems that the web is not able to address.

Have you ever…

  • Had a website or app you love get updated to a new version, and you wished to go back to the old version?
  • Tried to share a file between your phone and laptop or tv or other device while not connected to the internet? And without using a cloud service?
  • Gone to a website or service that you depend on, only to find it’s been shut down? Whether it got bought and enveloped by some internet giant, or has gone out of business, or whatever, it was critical for you and now it’s gone.

Additionally, the web is facing critical internet health issues, seemingly intractable due to the centralization of power in the hands of a few large companies who have economic interests in not solving these problems:

  • Hate speech, harassment and other attacks on social networks
  • Repeated attacks on Net Neutrality by governments and corporations
  • Mass human communications compromised and manipulated for profit or political gain
  • Censorship and whole internet shutdowns by governments

These are some of the problems and use-cases addressed by a new wave of projects, products and platforms building on or with web technologies but with a twist: They’re using decentralized or distributed network architectures instead of the centralized networks we use now, in order to let the users control their online experience without intermediaries, whether government or corporate. This new structural approach gives rise to the idea of a ‘decentralized web’, often conveniently shortened to ‘dweb’.

You can read a number of perspectives on centralization, and why it’s an important issue for us to tackle, in Mozilla’s Internet Health Report, released earlier this year.

What’s the “D” in Dweb?!

The “d” in “dweb” usually stands for either decentralized or distributed.
What is the difference between distributed vs decentralized architectures? Here’s a visual illustration:

visual representation of centralized, decentralized, and distributed networks
(Image credit: Openclipart.org, your best source for technical clip art with animals)

In centralized systems, one entity has control over the participation of all other entities. In decentralized systems, power over participation is divided between more than one entity. In distributed systems, no one entity has control over the participation of any other entity.

Examples of centralization on the web today are the domain name system (DNS), servers run by a single company, and social networks designed for controlled communication.

A few examples of decentralized or distributed projects that became household names are Napster, BitTorrent and Bitcoin.

Some of these new dweb projects are decentralizing identity and social networking. Some are building distributed services in or on top of the existing centralized web, and others are distributed application protocols or platforms that run the web stack (HTML, JavaScript and CSS) on something other than HTTP. Also, there are blockchain-based platforms that run anything as long as it can be compiled into WebAssembly.

Here We Go

Mozilla’s mission is to put users in control of their experiences online. While some of these projects and technologies turn the familiar on its head (no servers! no DNS! no HTTP(S)!), it’s important for us to explore their potential for empowerment.

This is the first post in a series. We’ll introduce projects that cover social communication, online identity, file sharing, new economic models, as well as high-level application platforms. All of this work is either decentralized or distributed, minimizing or entirely removing centralized control.

You’ll meet the people behind these projects, and learn about their values and goals, the technical architectures used, and see basic code examples of using the project or platform.

So leave your assumptions at the door, and get ready to learn what a web more fully in users’ control could look like.

Note: This post is the introduction to the series.

This Week In RustThis Week in Rust 245

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

News & Blog Posts

Crate of the Week

This week's crate is Taizen, a Wikipedia browser for your terminal. Thanks to nasa42 for the suggestion!

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

158 pull requests were merged in the last week

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

No RFCs were approved this week.

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

RFCs
Tracking Issues & PRs

New RFCs

Upcoming Events

Online
Africa
Europe
North America

If you are running a Rust event please add it to the calendar to get it mentioned here. Email the Rust Community Team for access.

Rust Jobs

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

Rust is more restrictive, indeed. But only in the sense that a car with seatbelts is more restrictive than one without: both reach the same top speed, but only one of them will save you in a bad day 😊

Felix91gr on rust-users.

Thanks to Jules Kerssemakers for the suggestion!

Please submit your quotes for next week!

This Week in Rust is edited by: nasa42 and llogiq.

Mozilla Security BlogUpdate on the Distrust of Symantec TLS Certificates

Firefox 60 (the current release) displays an “untrusted connection” error for any website using a TLS/SSL certificate issued before June 1, 2016 that chains up to a Symantec root certificate. This is part of the consensus proposal for removing trust in Symantec TLS certificates that Mozilla adopted in 2017. This proposal was also adopted by the Google Chrome team, and more recently Apple announced their plan to distrust Symantec TLS certificates. As previously stated, DigiCert’s acquisition of Symantec’s Certification Authority has not changed these plans.

In early March when we last blogged on this topic, roughly 1% of websites were broken in Firefox 60 due to the change described above. Just before the release of Firefox 60 on May 9, 2018, less than 0.15% of websites were impacted – a major improvement in just a few months’ time.

The next phase of the consensus plan is to distrust any TLS certificate that chains up to a Symantec root, regardless of when it was issued (note that there is a small exception for TLS certificates issued by a few intermediate certificates that are managed by certain companies, and this phase does not affect S/MIME certificates). This change is scheduled for Firefox 63, with the following planned release dates:

  • Beta – September 5
  • Release – October 23

We have begun to assess the impact of the upcoming change to Firefox 63. We found that 3.5% of the top 1 million websites are still using Symantec certificates that will be distrusted in September and October (sooner in Firefox Nightly)! This number represents a very significant impact to Firefox users, but it has declined by over 20% in the past two months, and as the Firefox 63 release approaches, we expect the same rapid pace of improvement that we observed with the Firefox 60 release.

We strongly encourage website operators to replace any remaining Symantec TLS certificates immediately to avoid impacting their users as these certificates become distrusted in Firefox Nightly and Beta over the next few months. This upcoming change can already be tested in Firefox Nightly by setting the security.pki.distrust_ca_policy preference to “2” via the Configuration Editor.

The post Update on the Distrust of Symantec TLS Certificates appeared first on Mozilla Security Blog.

Mozilla Open Design BlogEvolving the Firefox Brand

Say “Firefox” and most people think of a web browser on their laptop or phone, period. TL;DR, there’s more to the story now, and our branding needs to evolve.

With the rapid evolution of the internet, people need new tools to make the most of it. So Firefox is creating new types of browsers and a range of new apps and services with the internet as the platform. From easy screen-shotting and file sharing to innovative ways to access the internet using voice and virtual reality, these tools will help people be more efficient, safer, and in control of their time online. Firefox is where purpose meets performance.

Firefox Quantum Browser Icon

As an icon, that fast fox with a flaming tail doesn’t offer enough design tools to represent this entire product family. Recoloring that logo or dissecting the fox could only take us so far. We needed to start from a new place.

A team made up of product and brand designers at Mozilla has begun imagining a new system to embrace all of the Firefox products in the pipeline and those still in the minds of our Emerging Technologies group. Working across traditional silos, we’re designing a system that can guide people smoothly from our marketing to our in-product experiences.

Today, we’re sharing our two design system approaches to ask for your feedback.

 

How this works.

For those who recall the Open Design process we used to craft our Mozilla brand identity, our approach here will feel familiar:

  • We are not crowdsourcing the answer.
  • There’ll be no voting.
  • No one is being asked to design anything for free.

Living by our open-source values of transparency and participation, we’re reaching out to our community to learn what people think. You can make your views known by commenting on this blog post below.

Extreme caveat: Although the products and projects are real, these design systems are still a work of fiction. Icons are not final. Each individual icon will undergo several rounds of refinement, or may change entirely, between now and their respective product launches. Our focus at this point is on the system.

We’ll be using these criteria to evaluate the work:

  • Do these two systems still feel like Firefox?
  • How visually cohesive is each of them? Does each hold together?
  • Can the design logic of these systems stretch to embrace new products in the future?
  • Do these systems reinforce the speed, safety, reliability, wit, and innovation that Firefox stands for?
  • Do these systems suggest our position as a tech company that puts people over profit?

All the details.

The brand architecture for both systems is made up of four levels.

Each system leads with a new Firefox masterbrand icon — an umbrella under which our product lines will live.

The masterbrand icon will show up in our marketing, at events, in co-branding with partners, and in places like the Google Play store where our products can be found. Who knows? Someday this icon may be what people think of when they hear the word “Firefox.”

At the general-purpose browser level, we’re proposing to update our Firefox Quantum desktop icon. We continue to simplify and modernize this icon, and people who use Firefox tell us they love it. Firefox Developer Edition and Firefox Nightly are rendered as color variants of the Quantum icon.


Browsers with a singular focus, such as our Firefox Reality browser for VR applications and our privacy-driven Firefox Focus mobile browser, share a common design approach for their icons. These are meant to relate most directly to the master brand as peers to the Firefox Quantum browser icon.

Finally, the icons for new applications and services signal the unique function of each product. Color and graphic treatment unite them and connect them to the master brand. Each icon shape is one of a kind, allowing people to distinguish among choices seen side by side on a screen.

Still in the works are explorations of typography, graphic patterns, motion, naming, events, partnerships, and other elements of the system that, used together with consistency in the product, will form the total brand experience.

Read along as we refine our final system over the next few months. What we roll out will be based on the feedback we receive here, insights we’re gathering from formal user testing, and our product knowledge and design sensibilities.

With your input, we’ll have a final system that will make a Firefox product recognizable out in the world even if a fox is nowhere in sight. And we’ll deliver a consistent experience from an advertisement to a button on a web page. Thanks for joining us on this new journey.

Madhava Enros, Sr. Director, Firefox User Experience

Tim Murray, Creative Director, Mozilla

The post Evolving the Firefox Brand appeared first on Mozilla Open Design.

Firefox Test PilotNew Features in Screenshots

As part of our Screenshots release on July 26, 2018, we thought we’d update you on a few new features that we think you’ll find especially useful.

We shipped a simple image editor a few months ago to enable users to annotate and crop their shots. Now we are expanding the editor with three more features: undo, redo, and text.

Undo and Redo

Drawing anything freehand with a mouse can be difficult, and while the editor previously provided an undo via the reset feature, that wiped out everything since the beginning of the editing session. With the new undo feature, users now can undo a single edit action at a time. The accompanying redo feature is there when users change their minds and want to undo their undos.

Text Tool

Writing text freehand with a mouse is difficult. The ability to add text to an image, however, is a very useful annotation feature. In the latest update to the Screenshots editor, users can insert text with the new text tool. To keep things simple, it is currently limited to one line of text per edit.

Users can drag to move text to their desired location by clicking and holding their left mouse button on the outside edge of the inserted text. You can also choose the font size and font color! With the new text tool, you can create and share your own meme in just a few minutes! And isn’t that why the internet exists?

Undo, redo, and a text tool!

What’s next?

We welcome your contributions and would like Screenshots to provide the best user experience possible. If you come across any issues or have new feature requests, you can log them on GitHub or Bugzilla.

Screenshots was originally an experiment from the Firefox Test Pilot team. Have a look at our current experiments, and let us know what you think!


New Features in Screenshots was originally published in Firefox Test Pilot on Medium, where people are continuing the conversation by highlighting and responding to this story.