Chris McDonaldMessage Broker: Maybe Invented Here

NIH (Not Invented Here) is a common initialism that refers to rejecting solutions that the project itself did not create. For example, some game studios NIH their own game engines, while others license an existing engine. There are advantages to both, and neither is always correct. For me, I have a tendency to NIH things when working in Rust. This post is to help me understand why.

When I started with Rust, I built an example bot for one of The AI Games challenges. The focus of that project was parsing the protocol and presenting it in a usable way, so folks could get started with it and extend it into their own bot. I built my parser from scratch, at first focusing on getting it working, then spending time looking at other parsers to see how they achieved their speed and what their interfaces in Rust looked like. I updated my implementation to account for this research, and I learned a lot about both Rust itself and implementing low-level parsers.

I did something similar for a couple of other projects, spending a lot of time implementing things that libraries already existed to do, or at least assist with. Something I've started to notice over time: I'm using libraries for the parts I used to NIH, especially the more serious I am about completing the project.

In my broker I'm doing little parsing of my own. I resisted using PlainTalk for my text protocol because I didn't want to write its parser in several languages. I'm using JSON instead, since most languages already have an implementation, even if it isn't the most pleasant to type. My only libraries so far are for encoding and decoding the allowed formats automatically. This means my socket and thread handling has been all custom or built into Rust itself.
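
As a sketch of what those encoding libraries do for me (using serde and serde_json as stand-ins; the exact crates are an assumption, and this Message type is illustrative, not my broker's actual wire format):

// Cargo.toml (assumed): serde = { version = "1", features = ["derive"] }, serde_json = "1"
use serde::{Deserialize, Serialize};

// A hypothetical broker message; the real wire format isn't shown in this post.
#[derive(Serialize, Deserialize, Debug)]
struct Message {
    topic: String,
    payload: String,
}

fn main() -> Result<(), serde_json::Error> {
    let msg = Message { topic: "logs".into(), payload: "hello".into() };
    let encoded = serde_json::to_string(&msg)?; // {"topic":"logs","payload":"hello"}
    let decoded: Message = serde_json::from_str(&encoded)?;
    println!("{} -> {:?}", encoded, decoded);
    Ok(())
}

The sockets and threads underneath, though, remain entirely hand-rolled.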

I definitely get joy out of working at those layers, which is an easy explanation for NIHing those parts of a project. I'm also learning a lot about the design as I implement my own. But I find myself at a crossroads: I can continue to NIH the layer and spend a week getting a workable socket, thread, and job handling story, or I can entangle the fate of my project more with the Rust community. To explain, let's talk about some pros and cons of NIHing versus using others' projects.

NIHing something means you can build something custom to your situation. It often takes more up-front time than using an external solution, and your project will need to bring in or build up the expertise to maintain that solution. The more central something is to the heart of your project, the stronger the case for NIHing it. If the heart of your project is learning, then it could make sense to NIH as much as possible.

Using something external means researching the many solutions that could fit, narrowing down to one or a few finalists to try, then learning how to use the chosen solution and adapting it to your project. Often the fit is not perfect, but the savings in time and required expertise can make up for that. There is also an accepted risk that the external project has different goals or gets discontinued.

This morning I found myself staring down the problem of reading from my sockets in the server. Wanting to be efficient with resources, I didn't want to rely solely on interval polling. I started by looking in the Rust standard library for a solution. The recommendations are to create a thread per connection, use interval polling, or use external libraries. Thread-per-connection won't work for my goals: the resource cost of switching between a lot of threads overshadows the cost of the work you are trying to perform. I had already ruled out interval polling. A less recommended path is wrapping the lower-level mechanisms yourself.
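
For reference, thread-per-connection with just the standard library looks roughly like this (a minimal sketch with a placeholder address and handler, not my broker's actual code):

use std::io::Read;
use std::net::TcpListener;
use std::thread;

fn main() -> std::io::Result<()> {
    let listener = TcpListener::bind("127.0.0.1:7000")?; // hypothetical address
    for stream in listener.incoming() {
        let mut stream = stream?;
        // One OS thread per client: simple, but with many connections the
        // cost of switching between threads dominates the useful work.
        thread::spawn(move || {
            let mut buf = [0u8; 1024];
            while let Ok(n) = stream.read(&mut buf) {
                if n == 0 { break; } // connection closed
                // ... parse and handle `n` bytes here ...
            }
        });
    }
    Ok(())
}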

So, I started looking into both more and less complete solutions to these problems. With less complete solutions, you can glue a few together, creating a normalized interface on top of them for your project to use. The more complete solutions do that normalization for you, often at the cost of not closely matching your needs. This brings me to what I mean by entangling my project's fate with the Rust community.

Tokio is a project the Rust community has started to center around for building services, protocol parsers, and other tools. It is designed to handle a large part of the asynchronous layer for its users. I heard about it at RustConf 2016 and read about it in This Week in Rust, but my understanding stayed high level and I hadn't had a serious Rust project to apply it to. When I began looking into it as a solution for my broker, I was delighted: their breakdown of the problem is similar to how I have been designing my broker already, the largest difference being their inclusion of futures.
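
As an illustration of the shape of this approach, here is a minimal Tokio accept loop. (This is sketched against the current tokio crate's async/await API, which postdates this post; at the time, Tokio was built on tokio-core and futures 0.1. Address and handler are placeholders.)

// Cargo.toml (assumed): tokio = { version = "1", features = ["full"] }
use tokio::io::AsyncReadExt;
use tokio::net::TcpListener;

#[tokio::main]
async fn main() -> std::io::Result<()> {
    let listener = TcpListener::bind("127.0.0.1:7000").await?; // hypothetical address
    loop {
        let (mut socket, _addr) = listener.accept().await?;
        // Tasks are lightweight futures multiplexed onto a small thread
        // pool, instead of one OS thread per connection.
        tokio::spawn(async move {
            let mut buf = [0u8; 1024];
            while let Ok(n) = socket.read(&mut buf).await {
                if n == 0 { break; } // connection closed
                // ... parse and handle `n` bytes here ...
            }
        });
    }
}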

The architecture match with Tokio, as well as the community's energy, makes it a good choice for me. I'll need to learn more about their framework and how to use it well as I go. But I'm confident I'll be able to refactor my broker to run on top of it in a day or so. Then I can get the rest of the minimal story for this message broker done this week. Once I have it doing the basics with at least the Rust driver, I'll open source it.


The Mozilla BlogThoughts on the Latest Development in the U.S. Administration Travel Ban case

This morning, the U.S. Supreme Court decided to hear the lawfulness of the U.S. Administration’s revised Travel Ban. We’ve opposed this Executive Order from the beginning as it undermines immigration law and impedes the travel necessary for people who build, maintain, and protect the Internet to come together.

Today’s new development means that until the legal case is resolved the travel ban cannot be enforced against people from the six predominantly Muslim countries who have legitimate ties or relationships to family or business in the U.S. This includes company employees and those visiting close family members.

However, the Supreme Court departed from lower court opinions by allowing the ban to be enforced against visa applicants with no connection to the U.S.  We hope that the Government will apply this standard in a manner so that qualified visa applicants who demonstrate valid reasons for travel to the U.S. are not discriminated against, and that these decisions are reliably made to avoid the chaos that travelers, families, and business experienced earlier this year.

Ultimately, we would like the Court to hold that blanket bans targeted at people of particular religions or nationalities are unlawful under the U.S. Constitution and harmfully impact families, businesses, and the global community.  We will continue to follow this case and advocate for the free flow of information and ideas across borders, of which travel is a key part.


Hacks.Mozilla.OrgOpus audio codec version 1.2 released

The Opus audio codec just got another major upgrade with the release of version 1.2 (see demo). Opus is a totally open, royalty-free, audio codec that can be used for all audio applications, from music streaming and storage to high-quality video-conferencing and VoIP. Its standardization by the Internet Engineering Task Force (IETF) in 2012 (RFC 6716) was a major victory for open standards. Opus is the default codec for WebRTC and is now included in all major web browsers.

This new release brings many speech and music quality improvements, especially at low bitrates. The result is that Opus can now push stereo music bitrates down to 32 kb/s and encode full-band speech down to 14 kb/s. All that is achieved while remaining fully compatible with RFC 6716. The new release also includes optimizations, new options, as well as many bug fixes. This demo shows a few of the upgrades that users and implementers will care about the most, including audio samples. For those who haven’t used Opus yet, now’s a good time to give it a try.

Gervase MarkhamRoot Store Policy 2.5 Published

Version 2.5 of Mozilla’s Root Store Policy has now been published. This document incorporates by reference the Common CCADB Policy 1.0.1.

With this update, we have mostly worked through the backlog of modernization proposals, and I’d call this a policy fit for a transparent, openly-run root program in 2017. That doesn’t mean that there’s not more that could be done, but we’ve come a long way from policy 2.2, which we were using until six months ago, and which hadn’t been substantively updated since 2012.

We also hope that, very soon, more root store operators will join the CCADB, which will reduce costs and administrative burdens on all sides, and hopefully allow root programs to be more responsive to changing circumstances and to requests for inclusion or change.

Alessio PlacitelliGetting Firefox data faster: the shutdown pingsender

The data our Firefox users share with us is the key to identify and fix performance issues that lead to a poor browsing experience. Collecting it is not enough if we don’t manage to receive the data in an acceptable time-frame. My esteemed colleague Chris already wrote about this a couple of times: data latency … 

Hacks.Mozilla.OrgAn inside look at Quantum DOM Scheduling

Multi-tab browsing is heavier than ever as people spend more time on services like Facebook, Twitter, YouTube, Netflix, and Google Docs, making these services part of their daily life and work on the Internet.

Quantum DOM: Scheduling is a significant piece of Project Quantum, which focuses on making Firefox more responsive, especially when lots of tabs are open. In this article, we’ll describe problems we identified in multi-tab browsing, the solutions we figured out, the current status of Quantum DOM, and opportunities for contribution to the project.

Problem 1: Task prioritization in different categories

Since multiprocess Firefox (e10s) was first enabled in Firefox 48, web content tabs now run in separate content processes in order to reduce overcrowding of OS resources in a given process. However, after further research, we found that the task queue of the main thread in the content process was still crowded with tasks in multiple categories. The tasks in the content process can come from a number of possible sources: through IPC (interprocess communication) from the main process (e.g. for input events, network data, and vsync), directly from web pages (e.g. from setTimeout, requestIdleCallback, or postMessage), or internally in the content process (e.g. for garbage collection or telemetry tasks). For better responsiveness, we’ve learned to prioritize tasks for user inputs and vsync above tasks for requestIdleCallback and garbage collection.

Problem 2: Lack of task prioritization between tabs

Inside Firefox, tasks running in foreground and background tabs are executed in First-Come-First-Served order, in a single task queue. It is quite reasonable to prioritize the foreground tasks over the background ones, in order to increase the responsiveness of the user experience for Firefox users.

Goals & solutions

Let’s take a look at how we approached these two scheduling challenges, breaking them into a series of actions leading to achievable goals:

  • Classify and prioritize tasks on the main thread of the content processes in 2 dimensions (categories & tab groups), to provide better responsiveness.
  • Preempt tasks running in the background tabs when the preemption is not noticeable to the user.
  • Provide an alternative to multiple content processes (e10s multi) when fewer content processes are available due to limited resources.

Task categorization

To resolve our first problem, we divide the task queue of the main thread in the content processes into 3 prioritized queues: High (User Input and Refresh Driver), Normal (DOM Event, Networking, TimerCallback, WorkerMessage), and Low (Garbage Collection, IdleCallback). Note: The order of tasks of the same priority is kept unchanged.
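
To picture the mechanics: this is three FIFO queues drained strictly in priority order, which is what preserves the relative order of same-priority tasks. A toy sketch of the structure (in Rust for brevity; the names are hypothetical, and Firefox's actual scheduler is C++ inside the content process):

use std::collections::VecDeque;

// Stand-in for Firefox's runnables.
type Task = Box<dyn FnOnce()>;

enum Priority {
    High,   // user input, refresh driver
    Normal, // DOM events, networking, timers, worker messages
    Low,    // garbage collection, idle callbacks
}

struct PrioritizedQueue {
    high: VecDeque<Task>,
    normal: VecDeque<Task>,
    low: VecDeque<Task>,
}

impl PrioritizedQueue {
    fn new() -> Self {
        PrioritizedQueue { high: VecDeque::new(), normal: VecDeque::new(), low: VecDeque::new() }
    }

    fn dispatch(&mut self, prio: Priority, task: Task) {
        match prio {
            Priority::High => self.high.push_back(task),
            Priority::Normal => self.normal.push_back(task),
            Priority::Low => self.low.push_back(task),
        }
    }

    // Drain higher-priority queues first; FIFO order within each queue
    // keeps tasks of the same priority unchanged relative to each other.
    fn next(&mut self) -> Option<Task> {
        self.high.pop_front()
            .or_else(|| self.normal.pop_front())
            .or_else(|| self.low.pop_front())
    }
}

fn main() {
    let mut queue = PrioritizedQueue::new();
    queue.dispatch(Priority::Low, Box::new(|| println!("garbage collection")));
    queue.dispatch(Priority::Normal, Box::new(|| println!("timer callback")));
    queue.dispatch(Priority::High, Box::new(|| println!("input event")));
    // Runs the input event first, even though it was enqueued last.
    while let Some(task) = queue.next() {
        task();
    }
}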

Task grouping

Before describing the solution to our second problem, let’s define a TabGroup as a set of open tabs that are associated via window.opener and window.parent. In the HTML standard, this is called a unit of related browsing contexts. Tasks are isolated and cannot affect each other if they belong to different TabGroups. Task grouping ensures that tasks from the same TabGroup are run in order while allowing us to interrupt tasks from background TabGroups in order to run tasks from a foreground TabGroup.

In Firefox internals, each window/document contains a reference to the TabGroup object it belongs to, which provides a set of useful dispatch APIs. These APIs make it easier for Firefox developers to associate a task with a particular TabGroup.

How tasks are grouped inside Firefox

Here are several examples to show how we group tasks in various categories inside Firefox:

  1. Inside the implementation of window.postMessage(), an asynchronous task called PostMessageEvent will be dispatched to the task queue of the main thread:
void nsGlobalWindow::PostMessageMozOuter(...) {
  ...
  RefPtr<PostMessageEvent> event = new PostMessageEvent(...);
  NS_DispatchToCurrentThread(event);
}

With the new association of DOM windows to their TabGroups and the new dispatching API provided in TabGroup, we can now associate this task with the appropriate TabGroup and specify the TaskCategory:

void nsGlobalWindow::PostMessageMozOuter(...) {
  ...
  RefPtr<PostMessageEvent> event = new PostMessageEvent(...);
  // nsGlobalWindow::Dispatch() helps to find the TabGroup of this window for dispatching.
  Dispatch("PostMessageEvent", TaskCategory::Other, event);
}
  2. In addition to the tasks that can be associated with a TabGroup, there are several kinds of tasks inside the content process such as telemetry data collection and resource management via garbage collection, which have no relationship to any web content. Here is how garbage collection starts:
void GCTimerFired() {
  // A timer callback to start the process of Garbage Collection.
}

void nsJSContext::PokeGC(...) {
  ...
  // The callback of GCTimerFired will be invoked asynchronously by enqueuing a task
  // into the task queue of the main thread to run GCTimerFired() after timeout.
  sGCTimer->InitWithFuncCallback(GCTimerFired, ...);
}

To group tasks that have no TabGroup dependencies, a special group called SystemGroup is introduced. Then, the PokeGC() method can be revised as shown here:

void nsJSContext::PokeGC(...) {
  ...
  sGCTimer->SetEventTarget(SystemGroup::EventTargetFor(TaskCategory::GC));
  sGCTimer->InitWithFuncCallback(GCTimerFired, ...);
}

We have now grouped this GCTimerFired task to the SystemGroup with TaskCategory::GC specified. This allows the scheduler to interrupt the task to run tasks for any foreground tab.

  3. In some cases, the same task can be requested either by specific web content or by an internal Firefox script with system privileges in the content process. We’ll have to decide if the SystemGroup makes sense for a request when it is not tied to any window/document. For example, in the implementation of DNSService in the content process, an optional TabGroup-versioned event target can be provided for dispatching the result callback after the DNS query is resolved. If the optional event target is not provided, the SystemGroup event target in TaskCategory::Network is chosen. We make the assumption that the request is fired from an internal script or an internal service which has no relationship to any window/document.
nsresult ChildDNSService::AsyncResolveExtendedNative(
 const nsACString &hostname,
 nsIDNSListener *listener,
 nsIEventTarget *target_,
 nsICancelable  **result)
{
  ...
  nsCOMPtr<nsIEventTarget> target = target_;
  if (!target) {
    target = SystemGroup::EventTargetFor(TaskCategory::Network);
  }

  RefPtr<DNSRequestChild> childReq =
    new DNSRequestChild(hostname, listener, target);
  ...
  childReq->StartRequest();
  childReq.forget(result);

  return NS_OK;
}

TabGroup categories

Once the task grouping is done inside the scheduler, we assign a cooperative thread per tab group from a pool to consume the tasks inside a TabGroup. Each cooperative thread is pre-emptable by the scheduler via JS interrupt at any safe point. The main thread is then virtualized via these cooperative threads.

In this new cooperative-thread approach, we ensure that only one thread at a time can run a task. This allocates more CPU time to the foreground TabGroup and also ensures internal data correctness in Firefox, which includes many services, managers, and data designed intentionally as singleton objects.

Obstacles to task grouping and scheduling

It’s clear that the performance of Quantum-DOM scheduling is highly dependent on the work of task grouping. Ideally, we’d expect each task to be associated with only one TabGroup. In reality, however, some tasks are designed to serve multiple TabGroups, which requires refactoring in advance to support grouping, and not all tasks can be grouped before the scheduler is ready to be enabled. Hence, to enable the scheduler aggressively before all tasks are grouped, we adopted a design that temporarily disables preemption when an ungrouped task arrives, since we never know which TabGroup an ungrouped task belongs to.

Current status of task grouping

We’d like to send thanks to the many engineers from various sub-modules including DOM, Graphic, ImageLib, Media, Layout, Network, Security, etc., who’ve helped clear these ungrouped (unlabeled) tasks according to the frequency shown in telemetry results.

The table below shows telemetry records of tasks running in the content process, providing a better picture of what Firefox is actually doing:

The good news is that over 80% of tasks (weighted by frequency) have been cleared recently. However, there is still a fair number of anonymous tasks to be cleared. Additional telemetry will help us check the mean time between two ungrouped tasks arriving on the main thread: the larger the mean time, the more performance gain we’ll see from the Quantum-DOM Scheduler.

Contribute to Quantum DOM development

As mentioned above, the more tasks are grouped (labeled), the more benefit we gain from the scheduler. If you are interested in contributing to Quantum-DOM, here are some ways you can help:

  • Pick any unassigned bug from the labeling meta-bug and follow this guideline for labeling.
  • If you are not familiar with these unlabeled bugs but want to help name the tasks, reducing the anonymous tasks in the telemetry results and improving future analysis, this guideline will be helpful to you. (Update: naming anonymous tasks is going to be addressed by an automation tool in this bug.)

If you get started fixing bugs and run into issues or questions, you can usually find the Quantum DOM team in Mozilla’s #content IRC channel.

Carsten BookSheriff Survey Results

Hi,
first a super big thanks for taking part in this year's Sheriff Survey – it helps us a lot!
Here are the results.
1. Overall “satisfaction” – we asked how people rate their interaction with us, from 1 (bad) to 10 (best).
The results so far:
3.1% = 5
3.1% = 7
12.5% = 8
43.8% = 9
37.5% = 10
2. What can we do better as Sheriffs?
We got a lot of feedback that it's not easy to find out who is on “sheriffduty”. We will take steps (like adding a |sheriffduty tag to IRC names), and we also have https://bugzilla.mozilla.org/show_bug.cgi?id=1144589 on file with the goal of showing that name on Treeherder.
We also try to make sure to set needinfo requests on backouts.
In any case, backouts are never meant to be personal, and it's part of our job to try our best to keep our trees open for developers. We also try to provide as much information as possible in the bug for why we backed out a change.
3. Things we can improve in general (not just sheriffs)?
An interesting idea in the feedback we got was about automation. We will follow up on it, and I already filed https://bugzilla.mozilla.org/show_bug.cgi?id=1375520 for the idea of having a “Backout Button” in Treeherder in case no sheriff is around. More bugs from ideas to improve general workflows will follow.
Again, thanks for taking part in the survey, and if you have questions/feedback/concerns/ideas you can of course contact me / the team at any time!

Doug Belshaw"And she turned round to me and said..."

Star Trek - turning around

I’d always assumed that my grandmother’s use of the sentence starter in this post’s title came from her time working in factories. I imagined it being a reference to someone turning around on the production line to say something bitchy or snarky. It turns out, however, that the phrase actually relates to performing a volte-face. In other words, it’s a criticism of someone changing their opinion in a way that others find hypocritical.

This kind of social judgement plays an important normative role in our society. It’s a delicate balance: too much of it and we feel restricted by cultural norms; not enough, and we have no common touchstones, experiences, and expectations.

I raise this because I feel we’re knee-deep in developments in the area that can broadly be considered ‘notification literacy’. There’s an element of technical understanding involved here, but on a social level it could be construed as walking the line between hypocrisy and protecting one’s own interests.

Let’s take the example of Facebook Messenger:

Facebook Messenger

The Sending… / Sent / Delivered / Read icons serve as ambient indicators that can add value to the interaction. However, that value is only added, I’d suggest, if the people involved in the conversation know how the indicators work, and are happy to ‘play by the rules of the game’. In other words, they’re in an active, consensual conversation without an unbalanced power dynamic or strained relationship.

I choose not to use Facebook products so can’t check directly, but I wouldn’t be surprised if there’s no option to turn off this double-tick feature. As a result, users are left in a quandary: do they open a message to see it in full (and thereby show that they’ve seen it), or do they just ignore it (and hope that the person goes away)? I’ve certainly overheard several conversations about how much of a difficult position this puts users in. Technology solves social problems as well as causing them.

A more nuanced approach is demonstrated by Twitter’s introduction of the double-tick feature to their direct messaging (DM). In this case, users have the option to turn off these ‘read receipts’.

Twitter DM settings

As I have this option unchecked, people who DM me on Twitter can’t see whether or not I’ve read their message. This is important, as I have ‘opened up’ my DMs, meaning anyone on Twitter can message me. Sometimes, I legitimately ignore people’s messages after reading them in full. And because I have read receipts (‘double ticks’) turned off, they’re none the wiser.

Interestingly, some platforms have gone even further than this. Path Messenger, for example, has experimented with allowing users to share more ambient statuses:

Path Messenger

This additional ambient information can be shared at the discretion of the user. It can be very useful in situations where you know the person you’re interacting with well. In fact, as Path is designed to be used with your closest contacts, this is entirely appropriate.

I think we’re still in a transition period with social networks and norms around them. These, as with all digital literacies, are context-dependent, so what’s acceptable in one community may be very different to what’s acceptable in another. It’s going to be interesting to see how these design patterns evolve over time, and how people develop social norms to deal with them.


Comments? Questions? Write your own blog post referencing this one, or email me: hello@dynamicskillset.com

Mozilla Reps CommunityNew Council Members – Spring 2017

We are very happy to announce that our new council members are already onboarded and working on their focus areas.

We are also extremely happy with the participation in these elections: for the first time we had a record number of 12 nominees, and 215 Reps (75% of the body) voted.

 

Welcome Ankit, Daniele, Elio, Faye, and Flore; we are very excited to have you on board.

Here are the areas that each of the new council members will work on:

  • Flore – Resources
  • Faye – Coaching
  • Ankit – Activate
  • Elio – Communications
  • Daniele – Onboarding

Of course they will also all work with the returning council members on the program’s strategy and implementation, bringing the Reps Program forward.

I would also like to thank and send #mozlove to Adriano, Ioana, Rara and Faisal for all their hard work during their term as Reps Council members. Your work has been impactful and appreciated, and we can’t thank you enough.

The Mozilla Reps Council is the governing body of the Mozilla Reps Program. It provides the general vision of the program and oversees day-to-day operations globally. Currently, 7 volunteers and 2 paid staff sit on the council. Find out more on the Reps wiki.

Don’t forget to congratulate the new Council members on the Discourse topic!

 

Ehsan AkhgariQuantum Flow Engineering Newsletter #14

We have about 13 more weeks before the train of Firefox 57 leaves the station.  Next week many of you will be at the upcoming work week, so I thought it may be a good time for some retrospection on our progress so far, to give you a good sense of how to extrapolate when you are planning things next week.

One difficulty with the Quantum Flow project is that, since it touches many different areas of the browser, it doesn’t lend itself very easily to nice charts.  🙂  It is hard to find one metric that all of this work fits inside, and that’s OK.  My goal this week is to highlight what we can achieve with focus in a limited amount of time, so I’ll bring up a couple of examples.

This is a snapshot of our burndown chart[1].  We currently have 182 closed bugs and 136 open bugs.  That’s great progress, and I’d like to thank everyone who helped with all aspects of this!

But to speak of a more direct measurement of performance, let’s look at our progress on Speedometer V2.  Today, I measured our progress so far on this benchmark by comparing Firefox 53, 54, 55.0b3 (latest beta as of this writing) and the latest Nightly, all x64 builds, on the reference hardware.  This is the result (numbers are the reported benchmark score, higher is better):

Speedometer improvements

There are also many other top level performance related projects that are ongoing and approaching final stages.  I’m really excited to see what the next few months are going to uncover for Firefox performance.

One administrative note: as most people will be getting updates from each other face to face next week, I won’t send out the newsletter.  Now let’s finish with this week’s list of acknowledgements to those who helped make Firefox faster during the past week; hopefully I’m not forgetting any names!

[1] (The number of bug fixes is a weird metric to use for performance improvements, since we use bugs as a unit of work, and the performance impact of each bug can be vastly different.  But I have tried to describe the details of these bugs for the most part before so the detailed information is at least available.)

Justin DolskePhoton Engineering Newsletter #7

Lucky you, here’s Photon update #7!

Let’s start off with a fresh new video that gives an overview of what we’re doing with the Quantum and Photon projects. If you’re not already running Nightly, but are willing to live on the cutting-edge, this would be a great time to give it a spin! Get involved to help us test out everything that’s new, and experience some of these great improvements first-hand!

 

Mozilla All-Hands

Next week, everyone at Mozilla will be gathering in San Francisco for our biannual All-Hands meeting. The Photon team will be using it as a repeat of our Toronto Work Week (as covered in Photon Update #2). So we’re going to be super-busy hacking on Photon. We’ve got even more great stuff coming up, and I can’t wait to talk about it in Photon Update #8. But… The intense focus means that I might not get that update out until the following week. I think the wait will be worth it. 🙂

 

Recent Changes

Menus/structure:

 

Animation:

  • Updated arrow-panel animations are going through review this week.
  • Users on macOS will notice that panel open/close animations are much smoother, as a result of a platform fix. (You’ll see more improvements soon, from the item above, as well as another platform fix to add a beautiful background blur to the panel).
  • Work continues on animations for the downloads toolbar button, stop/reload button, and page loading indicator.

 

Preferences:

 

Visual redesign:

  • Another community contribution: Oriol removed a small, unexpected line that was appearing at the top of some windows. Thanks for the patch!
  • Firefox will now automatically enable its touch mode (which increases the size of various UI elements to make them more touch-friendly) when used in Windows 10 Tablet mode.
  • The dark toolbar that previously landed for Windows 10 is now coming to macOS. (This just landed, and if it sticks will be in Friday’s Nightly build.)

 

Onboarding:

  • The onboarding tour content has landed and been polished to match the UI spec. You can click the Fox icon in about:home to give it a try! Currently it has 5 tours for existing (non-Photon) features — Private Browsing, Add-ons, Customization, Searching, and setting your Default Browser. These are planned to ship in Firefox 56 (for users installing Firefox for the first time). Additional tours will next be implemented for Firefox 57, to introduce new Photon features to existing Firefox users.
  • The onboarding tour now has UI to allow hiding it (so users who don’t want to go through each tour step can just make it go away).
  • The Mozilla logo and onboarding icon are now shown on the correct sides for RTL languages.
  • A Sync tour and tour notifications will be landing soon.

 

Performance:

  • Places (our bookmarks and history storage system) is now initialized after first paint on startup. This helps make Firefox feel faster to launch, because the window will be shown sooner.
  • More giant patches are up for review for removing Task.jsm calls, and the last blocker to starting work on removing Promise.jsm usage has been fixed.
  • More awesome work on improving Talos measurements and figuring out regressions. (Particularly some issues that have been holding up animations.)
  • Florian posted in firefox-dev about the browser_startup.js test, and asked everybody to have a look at the generated list to identify low hanging fruit. This test helps us find code that is loading too early, and prevents things from regressing once we fix it.

 

Thus concludes Photon update #7. As noted above, next week is going to be a little busy, so it may be a couple of weeks until the next update.


Tarek ZiadéAdvanced Molotov example

Last week, I blogged about how to drive Firefox from a Molotov script using Arsenic.

It is pretty straightforward if you are doing some isolated interactions with Firefox and if each worker in Molotov lives its own life.

However, if you need to have several "users" (==workers in Molotov) running in a coordinated way on the same web page, it gets a little bit tricky.

Each worker is its own coroutine and triggers the execution of one scenario by calling the coroutine that was decorated with @scenario.

Let's consider this simple use case: we want to run five workers in parallel that all visit the same etherpad lite page with their own Firefox instance through Arsenic.

One of them is adding some content in the pad and all the others are waiting on the page to check that it is updated with that content.

So we want four workers to wait on a condition (= pad written) before they check that they can see the new content.

Moreover, since Molotov can call a scenario many times in a row, we need to make sure that everything was done in the previous round before changing the pad content again. That is, four workers did check the content of the pad.

To do all that synchronization, Python's asyncio offers primitives that are similar to the ones you would use with threads. asyncio.Event can be used, for instance, to have readers waiting for the writer and vice-versa.

In the example below, a class wraps two Events and exposes simple methods to do the syncing by making sure readers and writer are waiting for each other:

class Notifier(object):
    def __init__(self, readers=5):
        # With `readers` total workers, one of them is the writer, so the
        # counter starts at 1 and the readers event fires after
        # `readers - 1` calls to one_read().
        self._current = 1
        self._until = readers
        self._readers = asyncio.Event()  # set once every reader has read
        self._writer = asyncio.Event()   # set once the writer has written

    def _is_set(self):
        return self._current == self._until

    async def wait_for_writer(self):
        await self._writer.wait()

    async def one_read(self):
        # Called by each reader; the last read releases wait_for_readers().
        if self._is_set():
            return
        self._current += 1
        if self._current == self._until:
            self._readers.set()

    def written(self):
        self._writer.set()

    async def wait_for_readers(self):
        await self._readers.wait()
Using this class, the writer can call written() once it has filled the pad and the readers can wait for that event by calling wait_for_writer() which blocks until the write event is set.

one_read() is then called for each read. This second event is used by the next writer to make sure it can change the pad content after every reader has read it.

So how do we use this class in a Molotov test? There are several options and the simplest one is to create one Notifier instance per run and set it in a variable:

@molotov.scenario(1)
async def example(session):
    get_var = molotov.get_var
    notifier = get_var('notifier' + str(session.step),
                       factory=Notifier)
    wid = session.worker_id

    if wid != 4:
        # I am NOT worker 4! I read the pad

        # wait for worker #4 to edit the pad
        await notifier.wait_for_writer()

        # <.. pad reading here...>

        # notify that we've read it
        await notifier.one_read()
    else:
        # I am worker 4! I write in the pad
        if session.step > 1:
            # waiting for the previous readers to have finished
            # before we start a new round
            previous_notifier = get_var('notifier' + str(session.step - 1))
            await previous_notifier.wait_for_readers()

        # <... writes in the pad...>

        # informs that the write task was done
        notifier.written()

A lot is going on in this scenario. Let's look at each part in detail. First of all, the notifier is created as a var via get_var() and its factory argument. Its name contains the session step.

The step value is incremented by Molotov every time a worker is running a scenario, and we can use that value to create one distinct Notifier instance per run. It starts at 1.

Next, the session.worker_id value gives each distinct worker a unique id. If you run molotov with 5 workers, you will get values from 0 to 4.

We are making the last worker (worker_id == 4) the one that will be in charge of writing in the pad.

The other workers (the readers) just use wait_for_writer() to sit and wait for worker 4 to write the pad. Worker 4 notifies them with a call to written().

The last part of the script allows Molotov to run the script several times in a row using the same workers. When the writer starts its work, if the step value is greater than one, it means that we have already run the test at least once.

The writer, in that case, gets back the Notifier from the previous run and verifies that all the readers did their job before changing the pad.

All of this syncing work sounds complicated, but once you understand the pattern, it lets you run advanced scenarios in Molotov where several concurrent "users" need to collaborate.

You can find the full script at https://github.com/tarekziade/molosonic/blob/master/loadtest.py

Firefox UXLet‘s tackle the same challenge again, and again.

Actually, let’s not!

The products we build get more design attention now that our Firefox UX team has grown from about 15 to 45 people. Designers can now continue to focus on their product after the initial design is finished, instead of having to move on to the next project. This is great, as it helps us improve our products step by step. But it also takes increasing effort to keep this growing team in sync and able to answer all questions posed to us in a timely manner.

Scaling communication from small to big teams leads to massive effort for a few.

For engineers and new designers especially, it is often difficult to get timely answers to simple questions. Those answers are often in the original spec, which too often is hard to locate. Or worse, they may be in the mind of a designer who has left, or who receives too many questions to respond in time.

In a survey we ran in early 2017, developers reported feeling that they

  • spend too much time identifying the right specs to build from,
  • spend too much time waiting for feedback from designers, and
  • spend too much time mapping new designs to existing UI elements.

In the same survey, designers reported feeling that they

  • spend too much time identifying current UI to re-use in their designs, and
  • spend too much time re-building current UI to use in their designs.

All those repetitive tasks people feel they spend too much time on ultimately keep us from tackling newer and bigger challenges. ‒ So, actually, let‘s not spend our time on those.

Let’s help people spend time on what they love to do.

Shifting some communication to a central tool can reduce load on people and lower the barrier for entry.

Let’s build tools that help developers know what a given UI should look like, without them needing to wait for feedback from designers. And let’s use that system for designers to identify UI we already built, and to learn how they can re-use it.

We call this the Photon Design System,
and its first beta version is ready to be used:
design.firefox.com/photon

We are happy to receive feedback and contributions on the current content of the system, as well as on what content to add next.

Photon Design System

Based on what we learned from people, we are building our design system to help people:

  • find what they are looking for easily,
  • understand the context of that quickly, and
  • more deeply understand Firefox Design.

Currently the Photon Design System covers fundamental design elements like icons, colors, typography and copy-writing, as well as our design principles and guidelines on how to design for scale. Defining those has already helped designers better align across products and features, and developers have a definitive source to fall back on when a design does not specify a color, icon, or other detail.

Growth

With all the design fundamentals in place we are starting to combine them into defined components that can easily be reused to create consistent Firefox UI across all platforms, from mobile to desktop, and from web-based to native. This will add value for people working on Firefox products, as well as help people working on extensions for Firefox.

If you are working on Firefox UI

We would love to learn from you what principles, patterns & components your team’s work touches, and what you feel is worth documenting for others to learn from, and use in their UI.

Share your principle/pattern/component with us!

And if you haven’t yet, ask yourself where you could use what’s already documented in the Photon Design System and help us find more and more synergies across our products to utilize.

If you are working on a Firefox extension

We would love to learn about where you would have wanted design support when building your extension, and when you had to spend more time on design than you intended to.

Share with us!



Mozilla Open Design BlogMDN’s new design is in Beta

Change is coming to MDN. In a recent post, we talked about updates to the MDN brand, and this time we want to focus on the upcoming design changes for MDN. MDN started as a repository for all Mozilla documentation, but today MDN’s mission is to provide developers with the information they need to build things on the open Web. We want to more clearly represent that mission in the naming and branding of MDN.

New MDN logo

MDN’s switch to new branding reflects an update of Mozilla’s overall brand identity, and we are taking this opportunity to update MDN’s visual design to match Mozilla’s design language and clean new look. For MDN that means bold typography that highlights the structure of the page, more contrast, and a reduction to the essentials. Color in particular is more sparingly used, so that the code highlighting stands out.

Here’s what you can expect from the first phase:

screenshot of new MDN design

New MDN design

The core idea behind MDN’s brand identity change is that MDN is a resource for web developers. We realize that MDN is a critical resource for many web developers and we want to make sure that this update is an upgrade for all users. Instead of one big update, we will make incremental changes to the design in several phases. For the initial launch, we will focus on applying the design language to the header, footer and typography. The second phase will see changes to landing pages such as the web platform, learning area, and MDN start page. The last part of the redesign will cover the article pages themselves, and prepare us for any functional changes we’ve got coming in the future.

Today, we are launching the first phase of the redesign to our beta users. Over the next few weeks we’ll collect feedback, and fix potential issues before releasing it to all MDN users in July. Become a beta tester on MDN and be among the first to see these updates, track the progress, and provide us with feedback to make the whole thing even better for the official launch.


Air MozillaMozilla Gigabit Eugene Open House

Mozilla Gigabit Eugene Open House Hello Eugene, Oregon! Come meet with local innovators, educators, entrepreneurs, students, and community advocates and learn about what it means to be a “Mozilla Gigabit...

Air MozillaGigabit Community Fund June 2017 RFP Webinar

Gigabit Community Fund June 2017 RFP Webinar This summer, we're launching a new round of the Mozilla Gigabit Community Fund. We're funding projects that explore how high-speed networks can be leveraged for...

Hacks.Mozilla.OrgPowerful New Additions to the CSS Grid Inspector in Firefox Nightly

CSS Grid is revolutionizing web design. It’s a flexible, simple design standard that can be used across all browsers and devices. Designers and developers are rapidly falling in love with it and so are we. That’s why we’ve been working hard on the Firefox Developer Tools Layout panel, adding powerful upgrades to the CSS Grid Inspector and Box Model. The latest improvements are now available in Firefox Nightly.

Layout Panel Improvements

The new Layout Panel lists all the available CSS Grid containers on the page and includes an overlay to help you visualize the grid itself. Now you can customize the information displayed on the overlay, including grid line numbers and dimensions.

This is especially useful if you’re still getting to know CSS Grid and how it all works.

There’s also a new interactive grid outline in the sidebar. Mouse over the outline to highlight parts of the grid on the page and display size, area, and position information.

The new “Display grid areas” setting shows the bounding areas and the associated area name in every cell. This feature was inspired by CSS Grid Template Builder, which was created by Anthony Dugois.

Finally, the Grid Inspector is capable of visualizing transformations applied to the grid container. This lets developers accurately see where their grid lines are on the page for any grids that are translated, skewed, rotated or scaled.

Improved Box Model Panel

We also added a Box Model Properties component that lists properties that affect the position, size and geometry of the selected element. In addition, you’ll be able to see and edit the top/left/bottom/right position and height/width properties—making live layout tweaks quick and easy.

Finally, you’ll also be able to see the offset parent for any positioned element, which is useful for quickly finding nested elements.

As always, we want to hear what you like or don’t like and how we can improve Firefox Dev Tools. Find us on Discourse or @firefoxdevtools on Twitter.

Thanks to the Community

Many people were influential in shipping the CSS Layout panel in Nightly, especially the Firefox Developer Tools and Developer Relations teams. We thank them for all their contributions to making Firefox awesome.

We also got a ton of help from the amazing people in the community, and participants in programs like Undergraduate Capstone Open Source Projects (UCOSP) and Google Summer of Code (GSoC). Many thanks to all the contributors who helped land features in this release including:

Micah Tigley – Computer science student at the University of Lethbridge, Winter 2017 UCOSP student, Summer 2017 GSoC student. Micah implemented the interactive grid outline and grid area display.

Alex Lockhart – Dalhousie University student, Winter 2017 UCOSP student. Alex contributed to the Box Model panel with the box model properties and position information.

Sheldon Roddick –  Student at Thompson Rivers University, Winter 2017 UCOSP student. Sheldon did a quick contribution to add the ability to edit the width and height in the box model.

If you’d like to become a contributor to Firefox Dev Tools, hit us up on GitHub or Slack or #devtools on irc.mozilla.org. There you will find all the resources you need to get started.

Air MozillaReps Weekly Meeting Jun. 22, 2017

Reps Weekly Meeting Jun. 22, 2017 This is a weekly call with some of the Reps to discuss all matters about/affecting Reps and invite Reps to share their work with everyone.

Dustin J. MitchellTaskcluster Manual Revamp

As the Great Taskcluster Migration draws near the finish line, we are seeing people new to Taskcluster and keen to take advantage of its new features every day. It’s exciting to build something with such expressive power: easy-to-use loaners, automatic toolchain builds, and a simple process for adding new tests, to name just a few.

We have long had a thorough reference section, with technical details of the various microservices and workers that comprise Taskcluster, but that information is a bit too deep for a newcomer. A few years ago, we introduced a tutorial to guide the beginning user to the knowledge they need for their use-case, but the tutorial only goes so far.

Daniele Procida gave a great talk at PyCon 2017 about structuring documentation, which came down to this diagram:

 Tutorials   | How-To Guides 
-------------|---------------
 Discussions | Reference     

This shows four types of documentation. The top is practical, while the bottom is more theoretical. The left side is useful for learning, while the right side is useful when trying to solve a problem. So the missing components are “discussion” and “how-to guides”. Daniele’s “discussions” means prose-style expositions of a system, organized to increase the reader’s understanding of the system as a whole.

Taskcluster has had a manual for quite a while, but it did not really live up to this promise. Instead, it was a collection of documents that didn’t fit anywhere else.

Over the last few months, we have refashioned the manual to fit this form. It now starts out with a gentle but thorough description of tasks (the core concept of Taskcluster), then explains how tasks are executed before delving into the design of the system. At the end, it includes a bunch of use-cases with advice on how to solve them, filling the “how-to guides” requirement.

If you’ve been looking to learn more about Taskcluster, check it out!

Mozilla Reps CommunityRepsNext – Status Update June 2017

In the past few months we have kept working on the implementation of our RepsNext initiative. RepsNext started more than a year ago with the goal of bringing the Mozilla Reps program to the next level. Back in January we wrote a status update. Almost half a year later, we want to provide a further update. We have also published our OKRs for the current quarter, with goals to further the implementation of RepsNext.

RepsNext overview of what is done and what is not, explained further in the text of this article.

Resources

The Resources training is finalized. It’s still a little text-heavy, but we want to move forward with the training and iterate based on feedback. For this, we have reached out to a few Reps, selected based on their activity in the past 6 months, to test the training and give initial feedback about the process and content. Once we have this feedback, we will adjust the training if needed and then open up the Resources track for applications. Applications will most probably be done in a Google Form and will include general info about the Rep as well as a free-text field where the Rep can explain why they are suited for the track and provide links to previous, good budget requests they filed. You can learn more about the Resources track on the Resources wiki page.

Onboarding Process

We have simplified and streamlined the onboarding process for new Reps. Until April we had a lot of applications that had been open for more than 6 months. We are happy to report that we have onboarded 20 new Reps since April. A further 10 Reps are in the administrative process of signing the agreement and creating profiles on the portal. All of this is thanks to a new webinar, which allows us to give new Reps the much-needed first information about the program and what to expect as a Rep.

Participation Alignment

The Council is working with the Participation team to co-create the quarterly and yearly goals and OKRs for 2017. This has already happened twice this year, and we will continue to give our input and feedback in the quarters to come. The program’s goals are also being created based on the team’s goals and priorities. We also attend the monthly Open Innovation team calls. Of course this is ongoing work that will continue. The Reps Council is also involved in strategic and operational discussions as representatives of the broader community, giving feedback on the currently ongoing strategic projects. All of this work will continue at the All Hands in San Francisco later this month.

Leadership

At the beginning of our work on RepsNext, we wanted to create a specific Leadership track that Reps could apply for as a specialization. Throughout the past months it became clear that we want all Reps to improve their leadership skills to help out other Reps as well as their communities. Therefore we created an initial list of good leadership resources for everyone to access and learn from. For now this is a basic list of resources, to be improved upon in the future. We want all Reps to be able to improve their leadership skills as soon as possible and later build on top of this knowledge with further resources. Please provide your feedback in the Discourse topic!

Coaching

Previously known as Regional Coaches, Community Coaches will continue to support local communities. In addition, we are currently creating a Coaches Training to teach coaching skills to new Reps and to help existing mentors improve theirs. These coaches will be able to coach Reps on personal development. The idea is to offer the Coaches Training on a self-serve basis, so everyone can take the training and complete a narrative which is evaluated at the end to graduate from the training. This will help us increase the quality of coaching/mentoring in the Reps program as well as in local communities. It will also reduce the current bottleneck in onboarding new Reps, and we will be able to assign a coach to every Rep on a one-year commitment basis, with the option to switch coaches after this period. We are currently reviewing the implementation proposal so we can add the training to Teachable and publish it for all Reps.

Functional areas

We recently asked all Reps to choose their path for the future. This gives us a valuable basis for reasoning about functional doers in the Reps program. We will further build out the exact details about functional doers and their interests. The ongoing strategy projects will additionally give us valuable guidance in coming up with the right opportunities for functional doers. If you are interested in statistics about this survey, join our discussion on Discourse.

Upcoming work

We are in the last steps to finish our work on the Resources track and the Coaching training. This allows us to start talks on further improvements in the third quarter of this year. We are also going to the All Hands to discuss Reps, Strategy, Mobilizers and more with the Open Innovation team. We will update you about the outcomes of that after the All Hands.

You can follow all the Reps program’s goals and progress in the Reps Issue Tracker.

Which thoughts cross your mind upon reading this? Where would you like to help out? Let’s keep the conversation going! Join the discussion on Discourse.

Alex VincentValidating directory inputs?

A quick thought here.  I spent several hours today trying to figure out why a simple Firefox toolkit application wouldn’t work.  (I don’t know what to call “-app application.ini” applications anymore, as “XULRunner” has definitely fallen from favor…)  It took me far too long to realize that the “default” subdirectory should’ve been named “defaults” – something that I already know about these apps, but I only build them from scratch every two years or so…

Catching this sort of rookie mistake is, fundamentally, an argument validation exercise:  the main difference is instead of the argument being an object of some kind, it’s a directory on the filesystem.  If Mozilla has a module or component for validating a directory’s structure in general, I haven’t heard of it…

Which is the point of my post here.  I’m wondering what general-purpose libraries exist for validating a directory tree’s structure and contents at a basic level.  Somebody out there must have run into this problem before and created libraries for this.  I’d love to see libraries written in C++, D, Python, NodeJS and/or privileged JavaScript.  Please reply to my post if you can point me to them.  (For once, a quick search on the world’s most popular search engine fails me…)  Bonus points for libraries that allow passing in callbacks for file-specific validation. (“Is there a syntactically correct .ini file at (root)/application.ini?”)

Air MozillaCommunity Participation Guidelines Revision Brownbag (APAC)

Community Participation Guidelines Revision Brownbag (APAC) A revised version of Mozilla's Community Participation Guidelines was released in May 2017. Please join Larissa Shapiro (Head of D&I) and Lizz Noonan (D&I Coordinator)...

Air MozillaThe Joy of Coding - Episode 103

The Joy of Coding - Episode 103 mconley livehacks on real Firefox bugs while thinking aloud.

Mozilla Addons BlogUpcoming changes for add-on usage statistics

We’re changing the way we calculate add-on usage statistics on AMO so they better reflect their real-world usage. This change will go live on the site later this week.

The user count is a very important part of AMO. We show it prominently on listing and search pages, and it’s a key factor in determining add-on popularity and search ranking.

Most popular add-ons on AMO

However, there are a couple of problems with it:

  • We count both enabled and disabled installs. This means some add-ons with high disable rates have a higher ranking than they should.
  • It’s an average over a period of several weeks. Add-ons that are rapidly growing in users have user numbers that are lagging behind.

We’ll be calculating the new average based on enabled installs for the past two weeks of activity. We believe this will reflect add-on usage more accurately.
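
In other words, the displayed count becomes a plain average of daily enabled-install totals over a 14-day window. A hypothetical sketch of that calculation (illustrative only, not AMO's actual code):

// Hypothetical sketch of the described metric: average the daily counts of
// *enabled* installs over the past two weeks; disabled installs are ignored.
fn displayed_user_count(daily_enabled_installs: &[u64]) -> u64 {
    let days = daily_enabled_installs.len().min(14);
    if days == 0 {
        return 0;
    }
    let recent = &daily_enabled_installs[daily_enabled_installs.len() - days..];
    recent.iter().sum::<u64>() / days as u64
}

fn main() {
    // 14 days of made-up enabled-install counts for one add-on.
    let daily = [980, 1000, 1010, 990, 1020, 1040, 1050,
                 1060, 1080, 1100, 1120, 1150, 1170, 1200];
    println!("displayed user count: {}", displayed_user_count(&daily));
}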

What it means for add-on developers

We expect most add-ons to experience a small drop in their user numbers, due to the removal of disabled installs. Most add-on rankings on AMO won’t change significantly. This change also doesn’t affect the detailed statistics dashboard developers have access to. Only the number displayed on user-facing sections of the site will change.

If you notice any problems with the statistics or anything else on AMO, please let us know by creating an issue.

The post Upcoming changes for add-on usage statistics appeared first on Mozilla Add-ons Blog.

Hacks.Mozilla.OrgDesigning for performance: A data-informed approach for Quantum development

When we announced Project Quantum last October, we talked about how users would benefit from our focus on “performance gains…that will be so noticeable that your entire web experience will feel different.”

We shipped the first significant part of this in Firefox 53, and continue to work on the engineering side. Now let’s dive into the performance side and the work we’re doing to ensure that our users will enjoy a faster Web experience.

What makes work on performance so challenging and why is it so important to include the user from the very beginning?


Performance — a contested subject, to say the least!

Awareness of performance as a UX issue often begins with a negative experience – when things get slow or don’t work as expected. In fact, good performance is already table stakes, something that everyone expects from an online product or service. Outstanding performance will very soon become the new baseline point of reference.

The other issue is that there are different perspectives on performance. For users, performance is about their experience and is very often unspecific. For them, perception of good performance can range from “this is amazingly fast” to “SLOW!”, from “WOW!” to “NO!”. For engineers, performance is about numbers and processes. The probes that collect data in the code often measure one specific task in the pipeline. Measuring and tracking capabilities like Garbage Collection (GC) enables engineers to react to regressions in the data quickly, and work on fixing the root causes.

This is why there can be a disconnect between user experience and engineering efforts at mitigation. We measure garbage collection, but it’s often measured without context, such as whether it runs during page load, while the user interacts with a website, or during event queue idle time. Often, GC is within budget, which means that users will hardly perceive it. More generally, specific aspects of what we measure with our probes can be hard to map to the unspecific experience of performance that users have.

Defining technical and perceived performance

To describe an approach for optimizing performance for users, let us start by defining what performance means. For us, there are two sides to performance: technical performance and perceived performance.

Under technical performance, we include the things that we can measure in the browser: how long page elements take to render, how fast we can parse JavaScript or — and that is often more important to understand — how slow certain things are. Technical performance can be measured and the resulting data can be used to investigate performance issues. Technical performance represents the engineer’s viewpoint.

On the other hand, there is the topic of how users experience performance. When users talk about their browser’s performance, they talk about perceived performance or “Quality of Experience” (QoE). Users express QoE in terms of any perceivable, recognized, and nameable characteristic of the product. In the QoE theory, these are called QoE features. We may assume that these characteristics are related to factors in the product that impact technical performance, the QoE factors, but this is not necessarily given.

A promising approach to user-perceived optimization of performance is to identify those factors that have the biggest impact on QoE features and focus on optimizing their technical performance.

Understanding perception

The first step towards optimizing Quantum for perceived performance is to understand how human perception works. We won’t go into details here, but it’s important to know that there are perception thresholds of duration that we can leverage. The most prominent ones for Web interactions were defined by Jakob Nielsen back in the 1990s, and even today, they inform user-centric performance models like RAIL. Following Nielsen’s thresholds gives a good first estimate of the budget available for certain tasks to be performed by the browser engine.

With our user research team, we are validating and investigating these perceptual thresholds for modern web content. We are running experiments with users, both in the lab and remotely. Of course, this will only happen with users’ consent and everybody will be able to opt in and opt out of these studies at any time. With tools like Shield, we run a set of experiments that allow us to learn about performance and how to improve it for users.

However, knowing the perceptual thresholds and the respective budget is just an important first step. In what follows, we go into a bit more detail about how we use a data-informed approach for benchmarking and optimizing performance during the development of our new browser engine.

Three pillars of perceived Web performance

The challenge with optimizing perceived performance of a browser engine is that there are many components involved in bringing data from the network to our screens. All these components may have an impact on the perceived performance and on the underlying perceptual thresholds. However, users don’t know about this structure and the engine. From their point of view, we can define three main pillars for how users perceive performance on the Web: page load, smoothness and responsiveness.

  • Page load: This is what people notice each time they load a new page. Users care about fast page loads, and we have seen in user research that this is often the way users determine good or bad performance in their browser. Key events defining the perceptual budget during page load are: an immediate response to the user request for a new page, also known as “First Render” or “First non-blank Paint”, and the moment when all important elements are displayed, currently discussed as Hero Element Timing.
  • Smoothness: Scrolling and panning have become challenging activities on modern websites, with infinite scrolling, parallax effects, and dynamic sticky elements. Animations create a better user experience when interacting with the page. Our users want to enjoy a smooth experience for scrolling the web and web animations, be it on social media pages or when shopping for the latest gadget. Often, people nowadays also refer to smoothness as “always 60 fps”.
  • Responsiveness: Beyond scrolling and panning, the other big group of user interactions on websites are mouse, touch, and keyboard inputs. As modern web services create a native-like experience, user expectations become more demanding, based on what people have come to expect from native apps on their laptops and desktop computers. Users have become sensitive to input latency, so we are currently looking at an ideal maximum delay of 100ms.

Targeted optimization for the whole Web

But how do we optimize these three pillars for the whole of the Web? It’s a bigger job than optimizing the performance of a single web service. In building Firefox, we face the challenge of optimizing our browser engine without knowing which pages our users visit or what they do on the Web, due to our commitment to user privacy. This also limits us in collecting data for specific websites or specific user tasks. However, we want to create the best Quality of Experience for as many users and sites as possible.

To start, we decided to focus on the types of content that are currently most popular with Web users. These categories are:

  • Search (e.g. Yahoo Search, Google, Bing)
  • Productivity (e.g. Yahoo Mail, Gmail, Outlook, GSuite)
  • Social (e.g. Facebook, LinkedIn, Twitter, Reddit)
  • Media (e.g. YouTube, Netflix, SoundCloud, Amazon Video)
  • E-commerce (e.g. eBay or Amazon)
  • News & Reference (e.g. NYTimes, BBC, Wikipedia)

Our goal is to learn from this initial set of categories and the most used sites within them and extend our work on improvements to other categories over time. But how do we now match technical to perceived performance and fix technical performance issues to improve the perceived ones?

A data-informed approach to optimizing a browser engine

The goal of our approach here is to take what matters to users and apply that knowledge to achieve technical impact in the engine. With the basics defined above, our iterative approach for optimizing the engine is as follows:

  1. Identification: Based on the set of categories in focus, we specify scenarios for page load, smoothness, and responsiveness that exceed the performance budget and negatively impact perceived performance.
  2. Benchmarks: We define test cases for the identified scenarios so that they become reproducible and quantifiable in our benchmarking testbeds.
  3. Performance profiles: We record and analyze performance profiles to create a detailed view into what’s happening in the browser engine and guide engineers to identify and fix technical root causes.

Identification of scenarios exceeding performance budget

Input for identifying those scenarios comes from different sources: results from user research, bug reports, or user feedback. Here are two examples of such a scenario (a sketch of how one might encode them follows the list):

  • Scenario: browser startup
  • Category: a special case of page load
  • Performance budget: 1000ms for First Paint and 1500ms for Hero Element
  • Description: Open the browser by clicking the icon > wait for the browser to be fully loaded as a maximized window
  • What to measure: First Paint: browser window appears on the desktop; Hero Element: “Search” placeholder in the search box of the content window

  • Scenario: Open chat window on Facebook
  • Category: Responsiveness
  • Performance budget: 150ms
  • Description: Log in to Facebook > wait for the homepage to be fully loaded > click on a name in the chat panel to open a chat window
  • What to measure: time from mouse-click input event to showing the chat window on screen
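
For illustration, such a scenario could be encoded as a small record; every name in this Rust sketch is hypothetical and not taken from any Mozilla codebase:

// Hypothetical encoding of one scenario; all names are illustrative.
struct Scenario {
    name: &'static str,
    category: &'static str,
    budget_ms: u32,            // perceptual budget for the scenario
    description: &'static str, // the steps a tester performs
    measure: &'static str,     // the interval to time
}

const OPEN_CHAT: Scenario = Scenario {
    name: "Open chat window on Facebook",
    category: "Responsiveness",
    budget_ms: 150,
    description: "Log in > wait for full page load > click a name in the chat panel",
    measure: "mouse-click input event to chat window shown on screen",
};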

Benchmarks

We have built different testbeds that allow us to obtain valid and reproducible results, in order to create a baseline for each of the scenarios, and also to be able to track improvements over time. Talos is a Python-driven performance testing framework that, among many other tests, has a defined set of tests for browser startup and page load. It has recently been updated to match the new requirements and measure events closer to user perception, like First Paint.

Hasal, on the other hand, focuses on benchmarks around responsiveness and smoothness. It runs a defined set of scripts that perform the defined scenarios (like the “open chat window” scenario above) and extracts the required timing data by analyzing videos captured during the interaction.

Additionally, there is still a lot of non-automated, manual testing involved, especially for first rounds of baselining new scenarios before scripting them for automated testing. For this, we use an HDMI capture card and analyze the recorded videos frame by frame manually.

All these testbeds give us data about how critical the identified scenarios are in terms of exceeding their respective perceptual budgets. Running benchmarks regularly (once a week or even more often) for critical scenarios like browser startup also tracks improvements over time and shows clearly when improvements have brought a scenario back within its perceptual budget.

Performance profiles

Now that we have defined our scenarios and understand how much improvement is required to create good Quality of Experience, the last step is to enable engineers to achieve these improvements. The way that engineers look at performance problems in the browser engine is through performance profiles. Performance profiles are a snapshot of what happens in the browser engine during a specific user task such as one of our defined scenarios.

A performance profile using the Gecko Profiler. The profile shows Gecko’s main thread, four content threads, and the compositor main thread. Below is the call stack.

A profile consists of a timeline with tracing markers, different thread timelines and the call tree. The timeline consists of several rows that indicate interesting events in terms of tracing markers (colored segments). With the timeline, you can also zoom in to get more details for marked areas. The thread timelines show a list of profiled threads, like Gecko’s Main Thread, four content process threads (thanks to multi-process), and the main thread of the compositor process, as seen in the profile above. The x-axis is synced to the timeline above, and the y-axis shows the stack depth at a given point in time. Finally, the call tree shows the collected samples within a given timeframe organized by ‘Running Time’.

It requires some experience to be able to read these performance profiles and translate them into actions. However, because they map critical user scenarios directly to technical performance, performance profiles serve as a good tool to improve the browser engine according to what users care about. The challenge here is to identify root causes to improve performance broadly, rather than focus on specific sites and individual bugs. This is also the reason why we focus on categories of pages and not an individual set of initial websites.

For in-depth information about performance profiles, here is an article and a talk from Ehsan Akhgari about performance profiles. We are continuously working on improving the profiler add-on, which is now written in React/Redux.

Iterative testing and profiling performance

The initial round of baselining and profiling performance for the scenarios above can help us go from identifying user performance issues to fixing those issues in the browser engine. However, only iterative testing and profiling of performance can ensure that patches that land in the code will also lead to the expected benefits in terms of performance budget.

Additionally, iterative benchmarking will also help identify the impact that a patch has on other critical scenarios. Looking across different performance profiles and capturing comparable interactions or page load scenarios actually leads to fixing root causes. By fixing root causes rather than focusing on one-off cases, we anticipate that we will be able to improve QoE and benefit entire categories of websites and activities.

Continuous performance monitoring with Telemetry

Ultimately, we want to go beyond a specific set of web categories and look at the Web as a whole. We also want to go beyond manual testing, as this is expensive and time-consuming. And we want to apply knowledge that we have obtained from our initial data-driven approach and extend it to monitoring performance across our user base through Telemetry.

We recently added probes to our Telemetry system that will help us to track events that matter to the user, in the wild across all websites, like first non-blank paint during page load. Over time, we will extend the set of probes meaningfully. A good first attempt to define and include probes that are closer to what users perceive has been made by the Google Chrome team with their Progressive Web Metrics.

A visualization of Progressive Web Metrics during page load and page interaction. The upper field shows the user interaction level and critical interactions related to the technical measures.

As mentioned in the beginning, for users performance is table stakes, something that they expect. In this article, we have explored how we capture issues in perceived performance, how we use benchmarks to measure the criticality of performance issues, and how we fix the issues by looking at performance profiles.

Beyond the scope of the current approach to performance, there’s an even more interesting question: Will improved performance lead to more usage of the browser or changes to how users use their browser? Can performance improvements increase user engagement?

But these are topics that still need more research — and, at some point in time, will be the subject for another blog post.

Meanwhile, if you are interested in following along with performance improvements and experiencing the enhanced performance of the Firefox browser, go download and install the latest Firefox Nightly build and see what you think of its QoE.

Air MozillaCommunity Participation Guidelines Revision Brownbag (EMEA)

Community Participation Guidelines Revision Brownbag (EMEA) A revised version of Mozilla's Community Participation Guidelines was released in May 2017. Please join Larissa Shapiro (Head of D&I) and Lizz Noonan (D&I Coordinator)...

The Mozilla BlogA $2 Million Prize to Decentralize the Web. Apply Today

We’re fueling a healthy Internet by supporting big ideas that keep the web accessible, decentralized and resilient. What will you build?

Mozilla and the National Science Foundation are offering a $2 million prize for big ideas that decentralize the web. And we’re accepting applications starting today.

Mozilla believes the Internet is a global public resource that must be open and accessible to all.  In the 21st century, a lack of Internet access is far more than an inconvenience — it’s a staggering disadvantage. Without access, individuals miss out on substantial economic and educational opportunities, government services and the ability to communicate with friends, family and peers.

Currently, 34 million people in the U.S. — 10% of the country’s population — lack access to high-quality Internet connectivity. This number jumps to 39% in rural communities and 41% on Tribal lands. And when disasters strike, millions more can lose vital connectivity right when it’s needed most.

To connect the unconnected and disconnected across the U.S., Mozilla today is accepting applications for the Wireless Innovation for a Networked Society (WINS) challenges. Sponsored by NSF, the challenges offer a total of $2 million in prize money for wireless solutions that get people online after disasters, or that connect communities lacking reliable Internet access.

The details:

Off-the-Grid Internet Challenge

When disasters like earthquakes and hurricanes strike, communications networks are among the first pieces of critical infrastructure to overload or fail. How can we leverage both the Internet’s decentralized design and current wireless technology to keep people connected to each other — and vital messaging and mapping services — in the aftermath of a disaster?

Challenge applicants will be expected to design both the means to access the wireless network (i.e. hardware) and the applications provided on top of that network (i.e. software). Projects should be portable, easy to power and simple to access.

Here’s an example: A backpack containing a hard drive computer, battery and Wi-Fi router. The router provides access, via a Wi-Fi network, to resources on the hard drive like maps and messaging applications.

Smart Community Networks Challenge

Many communities across the U.S. lack reliable Internet access. Sometimes commercial providers don’t supply affordable access; sometimes a particular community is too isolated; sometimes the speed and quality of access is too slow. How can we leverage existing infrastructure — physical or network — to provide high-quality wireless connectivity to communities in need?

Challenge applicants should plan for a high density of users, far-reaching range and robust bandwidth. Projects should also aim to have a minimal physical footprint and uphold users’ privacy and security.

Here’s an example: A neighborhood wireless network where the nodes are housed in, and draw power from, disused phone booths or similarly underutilized infrastructure.

These challenges are open to individuals and teams, nonprofits and for-profits. Applicants could be academics, technology activists, entrepreneurs or makers. We’re welcoming anyone with big ideas and passion for a healthy Internet to apply. Prizes will be available for both early-stage design concepts and fully-working prototypes.

To learn more and apply, visit https://wirelesschallenge.mozilla.org. This challenge is one of Mozilla’s open innovation competitions, which also includes the Equal Rating Innovation Challenge.

Related Reading: Internet access is an essential part of life, but the quality of that access can vary wildly, writes Mozilla’s Executive Director Mark Surman in Quartz

The post A $2 Million Prize to Decentralize the Web. Apply Today appeared first on The Mozilla Blog.

Firefox NightlyThese Weeks in Firefox: Issue 19

Highlights

The new Photon hamburger menu

  • We populated the onboarding overlay on about:newtab according to the UI spec. There are 5 tours with call-to-action buttons & you can hide the tour with a checkbox.
The new onboarding panel

Autocomplete 1st Result Time Graph

Mean!

Friends of the Firefox team

Project Updates

Add-ons

Activity Stream

  • June 15th Newsletter
  • Highlights:
    • Graduation Team has landed the about:newtab Preferences Pane, put in place a bunch of performance telemetry, and made some amazing progress on making sure all existing Firefox about:newtab tests pass.
    • The MVP Team has built out controls for collapsing/expanding AS sections, as well as landed the very cool ‘Recent Bookmarks’ section.
      • This iteration, we will be preffing on a minimal version of Activity Stream in Firefox Nightly!
    • Activity Stream Release Schedule

      The Activity Stream Release Schedule

    • Shield Study to examine AS vs. Tiles starting June 26th

Electrolysis (e10s)

  • Windows support for e10s+accessibility has been delayed until Firefox 56+ due to stability issues. Here’s the meta bug for the project. This bug appears to be the major blocker.

Firefox Core Engineering

Form Autofill

Form Autofill Demo

Note the neat form fill preview!

  • We enabled the heuristics algorithm. Form Autofill now supports forms without @autocomplete attributes, i.e. most en-US sites.
  • Enabled the auto-save feature and implemented the door hanger for notifying users of this.
  • Landed preview & highlight feature. You can now preview how addresses will be filled in multiple fields.
  • Landed the footer in the suggestion dropdown, which makes it easier for users to find the preferences page.
  • Landed basic select element support.
  • Polished the dialogs in about:preferences (two-layer in-content dialog and select element dropmark).

Photon

Performance
Structure
Animation
Visuals
Onboarding
  • The Sync tour and tour notification will land soon.
Preferences

Search

Test Pilot

  • A new version of Firefox Screenshots landed in Nightly last night with some nice bug fixes
  • Tab Center only works up to Firefox 55
  • Graduating July 5th:  Page Shot, Activity Stream
  • Graduating “soon” after July 5th:  Pulse, Tab Center
  • Keep your eyes open for three new experiments coming mid-July!

Chris McDonaldMessage Broker: Into String

Strings in most native or performance-focused languages tend to present a fair amount of complexity, and Rust is no exception. There are cases where you have a struct that needs to have a name, such as:

struct ServiceHandle {
    name: String,
}

The first ServiceHandle::new() function I would typically write when first learning Rust looked like this:

fn new(name: String) -> ServiceHandle {
    // other init stuff
    ServiceHandle { name: name }
}

In the real world I generally have a String to pass to ServiceHandle::new(String), so this API works well. But when I’m writing tests for my code, I want to pass hard-coded values with type &'static str. In order to do that, I have to call one of the conversion functions.

ServiceHandle::new("listener".into());
ServiceHandle::new("listener".to_owned());
ServiceHandle::new(String::from("listener"));

If I change the signature to something like:

fn new(name: &str) -> ServiceHandle {
    // other init stuff
    ServiceHandle { name: name.to_owned() }
}

Then I have to remember to prefix the String with & when passing it to the function. Another possibility is to provide from_string and from_str constructors, which is what I started to rely on next. Then I don’t have to remember as much, just use the right one for the right type.

fn from_str(name: &str) -> ServiceHandle {
    // skip init stuff, let from_string do it
    ServiceHandle::from_string(name.to_owned())
}

fn from_string(name: String) -> ServiceHandle {
    // other init stuff
    ServiceHandle { name: name }
}

This gets me into a better state where it is easy to remember what string type it takes, but it feels clumsy. Rust provides the From and Into traits that types can implement to enable more generic coding. They are, as their names imply, a way to automatically change types. They are reciprocal: thanks to a blanket implementation in the standard library, implementing From<A> for B automatically gives you Into<B> for A. For example, impl From<A> for B would allow you to write a function like fn thing<T: Into<B>>(arg: T) which could be called like thing(A {}).
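
A minimal, runnable illustration of that relationship, with placeholder types A and B:

struct A {}

struct B {
    value: String,
}

impl From<A> for B {
    fn from(_: A) -> B {
        B { value: "made from an A".to_owned() }
    }
}

// The standard library’s blanket impl of Into<U> for any T where U: From<T>
// is what lets a From implementation satisfy an Into bound.
fn thing<T: Into<B>>(arg: T) -> B {
    arg.into()
}

fn main() {
    let b1 = thing(A {});                                  // via From<A> for B
    let b2 = thing(B { value: "already a B".to_owned() }); // identity conversion
    println!("{} / {}", b1.value, b2.value);
}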

So the next iteration of my ServiceHandle::new() went generic:

fn new<S: Into<String>>(name: S) -> ServiceHandle {
    ServiceHandle { name: name.into() }
}

This allows calling with String, &str or several other types that can automatically be converted into a String. That makes writing test code with &'static str simple, while keeping dynamically generated String objects first-class citizens.
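
With that signature, test code and production code can call the same function; this snippet assumes the ServiceHandle definitions from above:

// Assumes the ServiceHandle struct and the generic new() from above.
fn demo() {
    // a &'static str literal, handy in tests:
    let _from_literal = ServiceHandle::new("listener");

    // a dynamically built String, as in real code:
    let name = format!("listener-{}", 7);
    let _from_string = ServiceHandle::new(name);
}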


Jean-Marc ValinOpus 1.2 is out!

Opus gets another major upgrade with the release of version 1.2. This release brings quality improvements to both speech and music, while remaining fully compatible with RFC 6716. There are also optimizations, new options, as well as many bug fixes. This Opus 1.2 demo describes a few of the upgrades that users and implementers will care about the most. You can download the code from the Opus website.

Air MozillaRain of Rust -3rd online meeting

Rain of Rust -3rd online meeting This event belongs to a series of online Rust events that we run in the month of June, 2017

David BryantI think I know what you mean, Kev, and I understand, but if you ask me it is your day too.

I think I know what you mean, Kev, and I understand, but if you ask me it is your day too. Happy Father’s Day, from one father to another.

Air MozillaMartes Mozilleros, 20 Jun 2017

Martes Mozilleros Reunión bi-semanal para hablar sobre el estado de Mozilla, la comunidad y sus proyectos. Bi-weekly meeting to talk (in Spanish) about Mozilla status, community and...

The Mozilla BlogFirefox Focus New to Android, blocks annoying ads and protects your privacy

Last year, we introduced Firefox Focus, a new browser for the iPhone and iPad, designed to be fast, simple and always private. A lot has happened since November, and more than ever before, we’re seeing consumers play an active role in trying to protect their personal data and save valuable megabytes on their data plans.

While we knew that Focus provided a useful service for those times when you want to keep your web browsing to yourself, we were floored by your response – it’s the highest-rated browser from a trusted brand for the iPhone and iPad, earning a 4.6 average rating on the App Store.

Today, I’m thrilled to announce that we’re launching our Firefox Focus mobile app for Android.

Like the iPhone and iPad version, the Android app is free of tabs and other visual clutter, and erasing your sessions is as easy as a simple tap. Firefox Focus allows you to browse the web without being followed by tracking ads, which are notorious for slowing down your mobile experience. Why do we block these ad trackers? Because they not only track your behavior without your knowledge, they also slow down the web on your mobile device.

Check out this video to learn more:

New Features for Android

For the Android release of Firefox Focus, we added the following features:

  • Ad tracker counter – For the curious, there’s a counter to list the number of ads that are blocked per site while using the app.
  • Disable tracker blocker – For sites that are not loading correctly, you can disable the tracker blocker to quickly take care of the problem and get back to where you left off.
  • Notification reminder – When Focus is running in the background, we’ll remind you through a notification and you can easily tap to erase your browsing history.

For Android users we also made Focus a great default browser experience. Since we support both custom tabs and the ability to disable ad blocking as needed, it works great with apps like Facebook when you just want to read an article without being tracked. We built Focus to empower you on the mobile web, and we will continue to introduce new features that make our products even better. Thanks for using Firefox Focus for a faster and more private mobile browsing experience.

Firefox Focus Settings View

You can download Firefox Focus on Google Play and in the App Store.

The post Firefox Focus New to Android, blocks annoying ads and protects your privacy appeared first on The Mozilla Blog.

QMOFirefox 55 Beta 4 Testday, June 23rd

Hello Mozillians,

We are happy to let you know that on Friday, June 23rd, we are organizing the Firefox 55 Beta 4 Testday. We’ll be focusing our testing on the following new features: Screenshots and Simplify Page.

Check out the detailed instructions via this etherpad.

No previous testing experience is required, so feel free to join us on the #qa IRC channel, where our moderators will offer you guidance and answer your questions.

Join us and help us make Firefox better!

See you on Friday!

Firefox NightlyResolved: Fixed – A short story about a community-reported bug

Firefox Nightly users, thanks to the telemetry and crash reports they send to Mozilla, are an amazing help to Firefox developers. The aggregated data sent by our community is extremely useful and allows spotting performance or stability regressions at the earliest stages of development. It probably can’t be emphasized enough how just using Nightly is a great way to get involved in Mozilla.

That said, we have in our Nightly community people who also actively hunt and report bugs and regressions and provide us detailed feedback, usually by opening a bug report in Bugzilla. These people are our core community, our first line of defense against regressions; they allow us to ship faster and better software, and many of them have been involved in Mozilla and Firefox for a long time.

Mozilla Suite nightly start screen

“Have you filed a bug?” is something you often hear open source developers say to people that report some anomaly or regression to them, and for the majority of our users, this sounds like a complicated process. Just explaining the bug they experience in terms that make the bug report actionable by the developer that will fix it is a skill in itself. And this is where our core community of power-users on Nightly shines: they have this skill.

But what if the reporter has these skills but is not comfortable communicating in English because it is not her native language? Yes, language can also be a barrier to giving feedback…

A few days ago, a mozillian from our Spanish community (a web developer in an IT company in Spain) sent me an email about a regression he was experiencing at work with Nightly over the last few days. This is the story I want to tell because it illustrates how powerful community work in open source can be and how lucky Mozilla is to have a dedicated global community.

Fernando, or StripTM as we know him in the Mozilla Hispano community, sent me an email about a major performance regression with forms on a page they have on their intranet: clicking in a form field would freeze the browser for seconds, and he wanted to know if I had heard about it. I hadn’t, so I asked him if there was a way I could see this page.

Intranets are tricky; the content there is by definition not public. But StripTM being a Web developer, he emailed me a reduced, anonymized version of the page so that I could test locally whether I could see the bug, and yes, I was experiencing it as well.

Since StripTM is not always comfortable writing bug reports in English, I did it for him and filed bug 1372843 a week ago and attached his test case. I fired up mozregression and found out that the bug was caused by the recent activation of Form Autofill in Nightly (see our article Preview Form Autofill in Firefox Nightly). In a nutshell, the performance problem was caused by the fact that this intranet page had 170(!) forms and our heuristics were cycling through all of the input fields in the page instead of only the ones for the form we had clicked in.

All in all, it took a total of 3 days to discover the performance problem, file a bug and get a patch for it in mozilla-central. This is what happens when you can put passionate and skilled volunteers in contact with our equally passionate and skilled staff!

So thank you Fernando for using Nightly all these years and yes, the publishing date of this post is also a way for us to thank you for your involvement in Mozilla and wish you a happy birthday!

Daniel Stenbergc-ares 1.13.0

The c-ares project may not be very fancy or make a lot of noise, but it steadily moves forward and boasts an amazing 95% code coverage in the automated tests.

Today we release c-ares 1.13.0.

This time there are basically three notable things to take home from this, apart from the 20-something bug fixes.

CVE-2017-1000381

Due to an oversight there was an API function that we didn’t fuzz and yes, it was found to have a security flaw. If you ask a server for a NAPTR DNS field and that response comes back carefully crafted, it could cause c-ares to access memory out of bounds.

All details for CVE-2017-1000381 on the c-ares site.

(Side-note: this is the first CVE I’ve received with a 7(!)-digit number to the right of the year.)

cmake

Now c-ares can optionally be built using cmake, in addition to the existing autotools setup.

Virtual socket IO

If you have a special setup or custom needs, c-ares now allows you to fully replace all the socket IO functions with your own custom set with ares_set_socket_functions.

This Week In RustThis Week in Rust 187

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

News & Blog Posts

Crate of the Week

This week's crate is include_dir, a crate that lets you include entire directory contents in your binary – like include_str!, but on steroids. Thanks to Michael Bryan for the suggestion!
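
For those who haven’t used the std macro it generalizes, include_str! embeds a single file’s contents in the binary at compile time (the file name below is just an example):

// include_str! embeds one file at compile time; include_dir applies
// the same idea to an entire directory tree.
static NOTES: &str = include_str!("notes.txt");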

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

122 pull requests were merged in the last week.

New Contributors

  • Marco Castelluccio
  • Thomas Lively
  • Wonwoo Choi

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now. This week's FCPs are:

New RFCs

Style RFCs

Style RFCs are part of the process for deciding on style guidelines for the Rust community and defaults for Rustfmt. The process is similar to the RFC process, but we try to reach rough consensus on issues (including a final comment period) before progressing to PRs. Just like the RFC process, all users are welcome to comment and submit RFCs. If you want to help decide what Rust code should look like, come get involved!

We're making good progress and the style is coming together. If you want to see the style in practice, check out our example or use the Integer32 Playground and select 'Proposed RFC' from the 'Format' menu. Be aware that implementation is work in progress.

Issues in final comment period:

Good first issues:

We're happy to mentor these; please reach out to us in #rust-style if you'd like to get involved.

Upcoming Events

If you are running a Rust event please add it to the calendar to get it mentioned here. Email the Rust Community Team for access.

Rust Jobs

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

impl<T> Clone for T {
  fn clone(&self) -> T {
    unsafe { std::ptr::read(self) }
  }
}

@horse_rust on twitter.

Thanks to llogiq for the suggestion.

Submit your quotes for next week!

This Week in Rust is edited by: nasa42, llogiq, and brson.

Mozilla Marketing Engineering & Ops BlogMozMEAO SRE Status Report - June 20, 2017

Here’s what happened on the MozMEAO SRE team from June 13th - June 20th.

Current work

Static site hosting

  • The irlpodcast site now has a staging environment also hosted in S3 with CloudFront. Additionally, Jenkins has been updated to deploy to staging and production via git push.

  • We’re going to move viewsourceconf.org from Kubernetes to S3 and CloudFront hosting. Production and staging environments have been provisioned, but we’ll need to update Jenkins to push changes to these new environments.

Basket move to Kubernetes

Kubernetes (general)

Our DataDog, New Relic and MIG DaemonSets have been configured to use Kubernetes tolerations to schedule pods on master nodes. This allows us to capture metrics from K8s master nodes in addition to worker nodes.

Frankfurt Kubernetes cluster provisioning

Work continues to enable our apps in the new Frankfurt Kubernetes cluster. In addition, we’re working on automating our app installs as much as possible.

MDN

  • ElasticSearch will be upgraded to 2.4 in SCL3 production, June 21 11 AM PST

  • We may reconsider self-hosting ElasticSearch.

Links

Daniel Stenberg“OPTIONS *” with curl

(Note: this blog post has been updated as the command line option changed after first publication, based on comments to this very post!)

curl is arguably a “Swiss army knife” of HTTP fiddling. It is one of the tools in the toolbox, with a large set of switches and options that allow us to tweak and modify our HTTP requests to really test, debug and torture our HTTP servers and services.

That’s the way we like it.

In curl 7.55.0 it will take yet another step into this territory when we finally introduce a way for users to send “OPTIONS *” and similar requests to servers. It has been requested occasionally by users over the years but now the waiting is over. (brought by this commit)

“OPTIONS *” is special and peculiar just because it is one of the few specified requests you can make to an HTTP server where the path part doesn’t start with a slash. Thus you cannot really end up with this based on a URL and as you know curl is pretty much all about URLs.

The OPTIONS method was introduced in HTTP/1.1 back in RFC 2068, published in January 1997 (even before curl was born), and with curl you’ve always been able to send an OPTIONS request with the -X option; you just were never able to send that single asterisk instead of a path.

In curl 7.55.0 and later versions, you can remove the initial slash from the path part that ends up in the request by using --request-target. So to send an OPTIONS * to example.com for http and https URLs, you could do it like:

$ curl --request-target "*" -X OPTIONS http://example.com
$ curl --request-target "*" -X OPTIONS https://example.com/

In classical curl style this also opens up the opportunity for you to issue completely illegal or otherwise nonsensical paths to your server to see what it does with them, to send totally weird targets to OPTIONS, and similar games:

$ curl --request-target "*never*" -X OPTIONS http://example.com

$ curl --request-target "allpasswords" http://example.com

Enjoy!

Chris McDonaldMessage Broker: Goals and (De)Motivations

Recently, I read a twitter rant that described message brokers as a poor combination of load balancer, database, and service discovery tool. It hit me hard since I’d just spent a week diving into writing my own message broker. While I have my own dislikes of brokers, I think they are handy tools. The tweeter stated that many of these things should be built into the services themselves, the goal being to keep the heavy work out of the center of the system. Message brokers do the opposite when used as a central bus.

Having this description of the problem space is turning out to be nice. It gives me some different framing for the various parts of the message broker I’ll be building and the underlying needs. It also pointed out a heavy flaw that message brokers as a central bus can cause trouble in some systems. While that twitter rant dismayed me at first, I now feel even more energized in building this tool.

This framing of load balancer, database, and service discovery reminds me to go read up on that tech as well, sourcing papers for those problems while looking into queuing-related things. I can acknowledge these subproblems and make sure they get solved well enough for my intended scale. That will be a key part of my design going forward: keeping my decisions favoring small to medium scale. I’ve seen message brokers work well in those scenarios and want to make an even better one of those.

This doesn’t mean one couldn’t use the broker in a larger scale operation. But I’m architecting it to encourage deliberate clustering beyond medium scale. Clustering acknowledges the fact that there are usually groups of services that are able to meet a work request without speaking outside of their group, except at one or two edges. What I hope to discover as part of the development process is how to encourage this: whether documenting and creating examples will be enough, or if I’ll need more core features.

I think keeping the message broker lightweight will be instrumental in encouraging clustering. If the message broker is heavy, folks won’t want to run too many instances. If it requires a lot of tuning to be useful, folks will want to only tune it once, as a central bus. Side note: as I typed this I realized this is why Redis is so good.

Alongside the lofty design and architecture goals, I want to mention my motivations and put the goals in perspective. This project’s main goal is to be a learning project. I want to better understand the internals of message buses. Most green-field backend projects will be utilizing a message bus and smaller services. Understanding the internals of the message bus and keeping them in mind will let me design better services.

I also want to build a complex, performance focused, realistic piece of software in Rust. I find the language fun to work with and writing my own thread orchestration that is safe is delightful. As I build up the basics in the broker and client, I’m learning a lot of practical Rust skills. Like many others writing and coding in Rust in their free time, I’m hoping this will help encourage more jobs writing Rust. If I’m lucky enough, I’ll get to secure one of those jobs.


Air MozillaCommunity Participation Guidelines Revision Brownbag (NALA)

Community Participation Guidelines Revision Brownbag (NALA) A revised version of Mozilla's Community Participation Guidelines was released in May 2017. Please join Larissa Shapiro (Head of D&I) and Lizz Noonan (D&I Coordinator)...

Air MozillaRep. Eshoo Net Neutrality Roundtable

Rep. Eshoo Net Neutrality Roundtable Congresswoman Anna Eshoo (D-CA) will convene a roundtable to discuss the impacts of net neutrality and the consequence of eviscerating the policy. Eshoo will hear...

Carsten BookReminder :) Please take part in the Sheriff Survey!

Hi,
just a reminder that we have our Sheriff Survey Running and please take part in it, it helps us a lot to improve our work!

Link: https://docs.google.com/a/mozilla.com/forms/d/e/1FAIpQLSfGBZ50zkG9W-Wnk1ACBfFvj1iu8e46I5gs9t-G3ZWDpcy4-A/viewform

thanks!

Tomcat

Cameron KaiserTenFourFox FPR1 available

TenFourFox Feature Parity Release 1 is available for testing (downloads, hashes, release notes). There are no major changes from the beta except for a couple minor efficiency updates and a font blacklist update, and all remaining applicable security issues have been backported as well.

Chris T reported that old issue 72 (a/k/a bug 641597) has resurfaced in FPR1. Most likely this bug was never actually fixed, just wallpapered over by something or other, and the efficiency improvements in FPR1 have made it easier to trigger again. That said, it has only ever manifested on certain 10.5 systems; it has never been reproduced on 10.4 by anyone, and I can't reproduce it myself on my own 10.5 DLSD PowerBook G4. For that reason I'm proceeding with the release as intended but if your system is affected, please post your steps to replicate and we'll compare them with Chris' (especially if you have a 10.4 system, since that will be much easier for me to personally debug). Please also note any haxies or system extensions as the issue can be replicated on a clean profile, meaning addons or weird settings don't appear to be a factor. If we find a fix and enough people are bitten, it should be possible to spin a point release.

The plan is for a Tuesday/Wednesday release ahead of schedule, so advise if there are any new showstoppers.

Emma IrwinEscaping the economy of souls — starting with Facebook (in 4 steps)

“I think it’s time for a reclamation movement.”

Tim Wu author of The Attention Merchant in a talk at @ Mozilla Toronto last week

A little over two months ago, I removed the web-warping, soul-exploiting goggles of a ‘free’ Facebook account — free as in guinea pig. I lost my best friend and partner to cancer around this time, and Facebook knew that.

I found myself staring at content curated just for me — a Ted Talk about end of life care, cancer foundations, hospital foundations, an ‘inspiring’ story of a boy who survived cancer, and a review of ‘Option B’, Sheryl Sandberg’s book on grief… I had joined her Facebook group, but they knew that too.

And there I was, as if waking in a horror movie finding vile tentacles of a venomous creature wrapped around me, I saw; I witnessed and felt the cost of free. The cost of my well being, of dignity and for all those around me — the cost of my attention, focus and awareness of the world around me.

Was my feed part of an experiment or just really shitty and cruel algorithms? Facebook doesn’t hide the fact it’s learning from people like me during personal crises. Rather, it publishes reports on the findings.

And probably what upset me the most was that Sheryl Sandberg of Facebook, whose book I liked and shared, who should be protective of people in grief, was bringing large numbers of people to her Facebook group — so much heartbreak, so much trauma data. And Sheryl is aware…

“ However, the company was widely criticised for manipulating material from people’s personal lives in order to play with user emotions or make them sad.

In response on Thursday, Facebook said that it was introducing new rules for conducting research on users with clearer guidelines, better training for researchers and a stricter review process.

But, it did not state whether or not it would notify users — or seek their consent — before starting a study.”

— BBC News “Facebook admits failings over emotion manipulation study”

The reason I write this is to wake you up as well; although you are likely partially there, you need to get all the way there. Please stumble with me to some type of reclamation movement; it’s important for humanity (no exaggeration). Facebook, and others in the economy of souls, design addictive technology to keep us there.

I’ve used the same excuses you are using. The spine of Facebook’s business model is your contact list — and this should be the center of any reclamation.

Below are the steps I’ve taken to wean myself off Facebook and my contact list off Facebook for good. I want an empowered online life, and I want that for you too.

Step 1 — Snap out of it!

I really hope you don’t have to lose someone close to you, or go through a trauma or tragedy to see the impact of your data being used against you. If you need inspiration watch Tim Wu’s talk and embrace the message that ‘free’ is not free. Read Facebook’s data policy, and remember they never said they would stop doing this.

Step 2 — Get Facebook Messenger, Disable Facebook

Didn’t see that one coming, did you? As much as Messenger annoyed me by being a separate app, what it provides is the ability to fully disable Facebook itself but keep messaging for a transition period — which can be as long as you need it to be. You can still talk to, and share photos with, grandma.

Think of Messenger as nicotine gum for FB addiction. Not great, still being tracked, but will likely get you further than cold turkey.

This one step means you’re unplugged from:

  • Fake News
  • Like/Reactions
  • Mindless feed scrolling
  • Interacting in groups
  • Unsolicited emotional reactions to content

But keep:

  • Messaging
  • Sharing photos
  • Group conversations

And slowly start migrating people to other tools for chat and conversation. Let them know why.

Step 3 — Curate Personal Content

Even though you spent a lot of time reading content on Facebook, chances are you’ve read fake news, crappy click bait and remained in a filter bubble of your own opinions. There’s a whole world out there!

  • Subscribe (yes, pay) to actual newspapers with real reporters. I now subscribe to the New York Times, and support local journalism with a subscription as well.
  • Use good tools. I like Flipboard, and organize all ‘read later’ content into Pocket, which is my go-to for the times I would normally have opened Facebook. Remember we’re dealing with addiction — replace old habits with new ones.
  • Watch Netflix or read a book. Step away from news and the world and escape. The ‘attention theft’ of Facebook really makes sense to me now that I realize how many extended periods of time are available to me.
  • Follow people unlike yourself on Twitter. I know Twitter has issues, but one thing at a time.

Step 4 — Influence others

I feel like a tiny drop in the ocean, but when people tell me their <insert information thing here> is on Facebook, I tell them I’m not on Facebook and so require another way. I see others doing this too. Even public pages on Facebook are not public — they’re draped in a kind of ‘free membership paywall’ that hides half the page if you’re not logged in.

Facebook groups are not good for forums, there are (much, much) better and open source forums. Suggest alternatives.

Tell people why you’re not on Facebook, but not in an arrogant kind of way — more like ‘I quit smoking because my kids need me to live’ kind of way that makes people reflect on their own health.

Public Pages are trapped in a ‘Free Membership Paywall’

Step 4 — Turn off Facebook Messenger

Turn off Facebook Messenger. I haven’t done this yet, but I am using it less and less. I probably use it three times a week, for people I haven’t moved over to other communications yet.

Go explore the web again.

https://twitter.com/clintlalonde/status/868661574914891777

Go reclaim the web.


Feature Photo by Marco Gomes Attribution-NonCommercial-ShareAlike License

Cross posted to Medium

Daniel Stenbergcurl doesn’t spew binary anymore

One of the least favorite habits of curl during all these years, I’ve been told, is when users forget to instruct the command line tool where to store the downloaded file and as a direct consequence, curl instead sends a lot of binary “gunk” to the terminal. The end result of that is at best just a busload of weird-looking characters on the screen, but with just a little bit of bad luck it can also lock up the terminal completely or change it in other ways.

Starting in curl 7.55.0 (from this commit), curl will inspect the beginning of each download that is about to be sent to the terminal (tty!) and attempt to detect and prevent raw binary output from being sent there. The check simply looks for a binary zero in the data.
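
In other words, something along these lines (a paraphrase in Rust, not curl’s actual C code):

// Paraphrase of the heuristic only, not curl’s implementation:
// treat the start of the download as binary if it contains a NUL byte.
fn looks_binary(first_chunk: &[u8]) -> bool {
    first_chunk.contains(&0)
}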

$ curl https://example.com/image.jpg
Warning: Binary output can mess up your terminal. Use "--output -" to tell curl to output it to your terminal anyway, or consider "--output <FILE>" to save to a file.

As the warning message says, there’s an option you can use to switch off this emergency check for when you truly know what you’re doing and you don’t need curl to prevent you from doing this. Then you just tell curl explicitly that you want the output to stdout, with “--output -” (or “-o -” for a shorter version):

$ curl -o - https://example.com/binblob.img

We’re eager to get your input and feedback on how this works. We are aware of the risk of false positives for UTF-16 and UTF-32 outputs, but we think they are rare enough to not make this a huge problem.

This feature should be able to drastically reduce the risk of this happening.

Pipes

(Update, added after the initial posting.)

Many have remarked or otherwise asked how this affects things when stdout is piped into something else. It doesn’t affect that! The whole point of this check is to only show the warning message if the binary output is sent to the terminal. If you instead pipe the output to another program, or if you redirect the output with >, that will not trigger the warning; everything continues just like before. Just like you’d expect it to.
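
For the curious, the core of such a check is small. Here is an illustrative sketch in Rust (not curl’s actual C code, which has more nuance): look at the first chunk of the body and, if it contains a zero byte while stdout is an interactive terminal, print the warning instead of writing.

use std::io::{self, IsTerminal, Write};

// Illustrative sketch of the terminal-protection idea (not curl's code).
// A zero byte in the first chunk is treated as "probably binary".
fn write_or_warn(first_chunk: &[u8]) -> io::Result<()> {
    let mut stdout = io::stdout();
    if stdout.is_terminal() && first_chunk.contains(&0) {
        eprintln!("Warning: Binary output can mess up your terminal.");
        return Ok(());
    }
    stdout.write_all(first_chunk)
}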

Air MozillaWebdev Beer and Tell: June 2017

Webdev Beer and Tell: June 2017 Once a month web developers across the Mozilla community get together (in person and virtually) to share what cool stuff we've been working on in...

Will Kahn-GreeneThe Soloists

Building Firefox is a big endeavor. There are many teams and projects covering initiatives, maintenance, bug fixing, triage, localization, support, understanding feedback, marketing, communication, releasing, supporting infrastructure, crash analysis, and a bazillion other activities all to build a family of browsers and applications.

Teams and projects aren't static. People move around as priorities change and the landscape shifts and projects complete or are scuttled.

Sometimes projects get started up with a single person. Sometimes all the people except one move off a project. Sometimes we find ourselves working alone, in a basement office, with only a stapler equivalent to keep us company.

We are the soloists. You wouldn't believe the list of things we work on. Alone.

Where to find soloists: IRC, Slack

There's an IRC channel #soloists on irc.mozilla.org.

There's also a Slack channel #soloists on the Mozilla Slack [1].

These two places (and whatever other places soloists want to hang out at) are places where we can:

  • find some solace from the weary drudgery of being alone on our projects for days on end
  • ask for help
  • bounce ideas off each other
  • vent frustrations in a friendly forgiving place
  • get advice on dealing with things like code reviews and how to go on vacation
  • get recognition for a job well done

and a variety of other things that alleviate many of the problems we have as soloists.

[1]I just created it, so it's kind of empty. I'm feeling alone in the #soloists Slack channel. So alone.

Stickers at the All Hands!

Over the last month or so, we spent some time figuring out #soloists stickers because we like stickers and you like stickers and everyone likes stickers.

They look like this:


Soloist 2017 sticker.

They're 2" by 2" and round. They're warm to the touch. They make you want to climb things. By yourself. Alone. With appropriate safety gear. [2]

If you're a soloist, come find one of us and get a sticker. Also, consider joining soloist channels.

If you support soloists, come find one of us and get a sticker. Ask us about the things we're working on. We may be solo, but we're working on real projects that almost certainly affect you. As a group, we did great things in the last 6 months. Alone. So alone.

[2]That's how they make me feel, anyhow.

Daniel Stenbergcurl: read headers from file

Starting in curl 7.55.0 (since this commit), you can tell curl to read custom headers from a file. This is a feature that has been asked for numerous times in the past, and the answer has always been to write a shell script to do it. Like this:

#!/bin/sh
# Turn each line of the 'headers' file into its own -H option.
while read line; do
  args="$args -H '$line'"
done < headers
# eval re-parses the accumulated string so the quoting is honored.
eval curl $args "'$URL'"

That’s now a response of the past (or for users stuck on old curl versions). We can now instead tell curl to read headers itself from a file using the curl standard @filename way:

$ curl -H @headers https://example.com

… and this also works if you want to just send custom headers to the proxy you do CONNECT to:

$ curl --proxy-header @headers --proxy proxy:8080 https://example.com/

(this is a pure curl tool change that doesn’t affect libcurl, the library)

Ehsan AkhgariQuantum Flow Engineering Newsletter #13

I’m back with some more updates on another week worth of work on improving various performance aspects of Firefox.

Similar to past weeks, Speedometer remains a big focus area for performance work. In addition to the many already identified bugs to work on, we are also still measuring the benchmark quite actively, looking for more optimization opportunities.

Another item worthy of an update is Background Hang Reports. Michael Layzell earlier today enabled collection of native stack traces on Win64 (and Mac) using the Gecko Profiler stack walking backend (Linux support soon to follow). Because we are now using the Gecko Profiler backend for BHR, we can soon get interleaved native and pseudo-stacks from BHR, similar to the ones we have come to know and love in the Gecko Profiler for a long time now! Also, Doug Thayer has made a lot of progress on hangs.html, his front-end for exploring the native stack traces uploaded from BHR. This is a nice and super fast tool for exploring the hangs our users are experiencing on the Nightly channel, and it shows you the corresponding pseudo-stacks, which are extremely helpful if, for example, the hang is coming from chrome-privileged JS (where we get full call stack information through telemetry). Please have a look, and send him feedback.

This edition is exceptionally short, but the most interesting part of these newsletters is probably the last part anyway, the credits section, where I acknowledge the hard work of the people who worked on improving the performance of Firefox in the past week. So let’s get to that, and I do hope I’m not dropping any names:

Mike HommeyAnnouncing git-cinnabar 0.5.0 beta 2

Git-cinnabar is a git remote helper to interact with mercurial repositories. It allows you to clone, pull and push from/to mercurial remote repositories, using git.

Get it on github.

These release notes are also available on the git-cinnabar wiki.

What’s new since 0.5.0 beta 1?

  • Enabled support for clonebundles for faster clones when the server provides them.
  • Git packs created by git-cinnabar are now smaller.
  • Added a new git cinnabar upgrade command to handle metadata upgrade separately from fsck.
  • Metadata upgrade is now significantly faster.
  • git cinnabar fsck is also faster.
  • Both now also use significantly less memory.
  • Updated git to 2.13.1 for git-cinnabar-helper.

Daniel Stenbergcurling over HTTP proxy

Starting in curl 7.55.0 (this commit), curl will no longer try to ask HTTP proxies to perform non-HTTP transfers with GET, except for FTP. For all other protocols, curl now assumes you want to tunnel through the HTTP proxy when you use such a proxy and protocol combination.

Protocols and proxies

curl supports 23 different protocols right now, if we count the S-versions (the TLS based alternatives) as separate protocols.

curl also currently supports seven different proxy types that can be set independently of the protocol.

One type of proxy that curl supports is the so-called “HTTP proxy”. The official HTTP standard includes a defined way to speak to such a proxy and ask it to perform the request on behalf of the client. curl supports this over either HTTP/1.1 or HTTP/1.0, where you’d typically only use the latter version if the first really doesn’t work with your ancient proxy.

HTTP proxy

All that is fine and good. But HTTP proxies were really only defined to handle HTTP, and to some extent HTTPS. When doing plain HTTP transfers over a proxy, the client will send its request to the proxy like this:

GET http://curl.haxx.se/ HTTP/1.1
Host: curl.haxx.se
Accept: */*
User-Agent: curl/7.55.0

… but for HTTPS, which should provide end to end encryption, a client needs to ask the proxy to instead tunnel through the proxy so that it can do TLS all the way, without any middle man, to the server:

CONNECT curl.haxx.se:443 HTTP/1.1
Host: curl.haxx.se:443
User-Agent: curl/7.55.0

When successful, the proxy responds with a “200” which means that the proxy has established a TCP connection to the remote server the client asked it to connect to, and the client can then proceed and do the TLS handshake with that server. When the TLS handshake is completed, a regular GET request is then sent over that established and secure TLS “tunnel” to the server. A GET request that then looks like one that is sent without proxy:

GET / HTTP/1.1
Host: curl.haxx.se
User-Agent: curl/7.55.0
Accept: */*

FTP over HTTP proxy

Things get more complicated when trying to perform transfers over the HTTP proxy using schemes that aren’t HTTP. As already described above, HTTP proxies are basically designed only for doing HTTP over them, but as they have this concept of tunneling through to the remote server it doesn’t have to be limited to just HTTP.

Also, historically, for decades people have deployed HTTP proxies that recognize FTP URLs, and transparently handle them for the client so the client can almost believe it is HTTP while the proxy has to speak FTP to the remote server in the other end and convert it back to HTTP to the client. On such proxies (Squid and Apache both support this mode for example), this sort of request is possible:

GET ftp://ftp.funet.fi/ HTTP/1.1
Host: ftp.funet.fi
User-Agent: curl/7.55.0
Accept: */*

curl knows this, and if you ask curl for FTP over an HTTP proxy, it will assume you have one of these proxies. It should be noted that this method of course limits what you can do FTP-wise; for example, FTP upload usually doesn’t work, and if you ask curl to do an FTP upload over an HTTP proxy it will do that with an HTTP PUT.

HTTP proxy tunnel

curl features an option (--proxytunnel) that lets the user forcibly tell the client not to assume that the proxy speaks this protocol, and to instead use the CONNECT method to establish a tunnel through the proxy to the remote server.

It should of course be noted that very few deployed HTTP proxies in the wild allow clients to CONNECT to whatever port they like. HTTP proxies tend to only allow connecting to port 443 as that is the official HTTPS port, and if you ask for another port it will respond back with a 4xx response code refusing to comply.

Not HTTP not FTP over HTTP proxy

So HTTP, HTTPS and FTP are sent over the HTTP proxy fine. That leaves us with nineteen more protocols. What happens with them when you ask curl to perform them over an HTTP proxy?

Now we have finally reached the change that has just been merged in curl and changes what curl does.

Before 7.55.0

curl would send all protocols as a regular GET to the proxy if asked to use an HTTP proxy without the explicit proxy-tunnel option. This came from how FTP was done and grew from there without many people questioning it. Of course it wouldn’t ever work, but very few people would actually attempt it because of that.

From 7.55.0

All protocols that aren’t HTTP, HTTPS or FTP will enable the tunnel-through mode automatically when an HTTP proxy is used. No more sending funny GET requests to proxies when they won’t work anyway. It also prevents users from accidentally leaking credentials to proxies that were intended for the server, which previously could happen if you omitted the tunnel option with a few authentication setups.

HTTP/2 proxy

Sorry, curl doesn’t support that yet. Patches welcome!

Justin DolskePhoton Engineering Newsletter #6

More exciting progress this week! Here’s Photon update #6!

New Menus

Work on the new Photon menus has reached the point where we’re ready to turn them on by default (for Nightly). Bug 1372309 is tracking the last remaining work (mostly test fixes), and you should see this happen in tomorrow’s Nightly. Up until now you’ve needed to manually enable the “browser.photon.structure.enabled” pref to play with the new menus – you’ll no longer need to flip that pref as it will already be enabled.

The biggest change you’ll notice is that the application menu (a.k.a. the “hamburger menu”) contents look different. Instead of a grid of icons, it’s a linear list of commands. Opening the menu and entering submenus is much snappier than before. Here’s the new look on Windows 10 (left) and macOS (right):


The overflow menu (under the “>>” icon) has existed for a long time now, normally it’s only shown when the window is so narrow that we run out of space to show all the toolbar icons. You can now pin items to it permanently, as the new destination for commands you want easily accessible without taking up toolbar space. (Previously you could do this by adding items to the hamburger menu. That’s no longer customizable.)


There are also some minor related changes to Customization Mode, which now shows the overflow menu as a customization target instead of the old hamburger menu.

Recent changes

Menus/structure:

  • Enabling the new menus, as mentioned above.
  • The sidebar toolbar button no longer has a panel dropdown, instead it just toggles the display of the sidebar (you can change which sidebar is shown from inside the sidebar itself).
  • Various smaller styling/polish fixes to the different panels and toolbar items have landed and will continue to land this week.
  • WebExtension browser actions will now be pinned to the overflow panel instead of the hamburger menu (though we are aware of at least one remaining issue with this).


Animation:

  • The Photon-themed download icon landed; this was spun out of the main download animation bug so we can start landing pieces as they’re ready.
  • Work continues on animations for downloads toolbar button, stop/reload button, and page loading indicator. We’re working through some performance issues with the latter two — these animations are triggered during our performance test suites, and we see some impact to the measurements.
  • New arrow-panel animations are underway. We’re updating the way panels and menus animate when they’re opened and closed. On macOS we’re temporarily removing the current animation entirely, while we await platform improvements that allow us to get the effect we want in a way that performs well.


Preferences:

  • QA sign-off received for the old preferences shipping in Firefox 55 (which have not been the default on Nightly since the new preference reorg landed).
  • Search followups are largely complete, and we are enabling the search feature this week.


Visual redesign:

  • We got some good contributions from community member UK92! Thanks!
    • Updated two of our in-content pages (about:about and about:rights) to use the new Photon style.
    • With maximized windows on Windows 10, the window control buttons now span the entire height of the tabstrip, eliminating a small gap.
  • Landing updates to the sidebar styling (header and search box)
  • Updated the Synced Tabs button icon in the toolbar.
  • Starting work on changing the color of the titlebar on macOS (making it darker, similar to Windows 10).


Onboarding:

  • Lots of discussion and decisions, finalized scope and content for Firefox 56 tour.
  • De-scoped automigration, and are instead moving ahead with a manual import option accessible from the new Activity Stream page.
  • Simplified tour and notification logic
  • Outstanding technical issues resolved and a few 56 tour contents are ready to land this week. No more blank tour overlay in Nightly!


Performance:


Stay tuned for more updates next week!


Chris McDonaldMessage Broker: Channel Naming

I’ve started building a message broker as a learning project. There are several out there such a RabbitMQ, Kafka, Redis’ pub/sub layer, and some brokerless message queue solutions like 0mq. Having used many of them over the years and studying the topic both from the classic “Enterprise Integration” side and the more modern/agile “Microservices” side, I figured I’d try my hand at implementing one.

In this blog post, I’m going to go over how I designed and implemented channel naming. Channel names fill the role of data descriptor for publishers and the role of query language for subscribers. This turned out to be a delightful series of problems to explore. Questions such as “how do people use them”, “what features are expected”, and “what limitations are common” came up. Realizing that my message broker is essentially providing a naming framework that will have at least some opinion, I needed to ask “are there practices I want to encourage or discourage” and recognize my influence.

In almost all message queue systems, channels have a string-based name. This name may be broken up by delimiters, and that delimiter is sometimes chosen by the client or set in a config on the broker. Most systems support the ability to wild card parts of the channel when subscribing. Sometimes the wild card is purely string based; in delimited channels the wild card is often done by chunk. Wild cards are sometimes restricted in which positions they can appear: almost always allowed at the end, sometimes in the middle, and sometimes disallowed at the start.
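
To make that concrete, here is a hypothetical naming scheme of the sort these systems encourage (the names are invented for illustration, with “.” as the delimiter):

orders.us.created      a publisher's exact, delimited hierarchy
orders.*.created       subscriber wild card in the middle
orders.us.*            subscriber wild card at the end
*.us.created           subscriber wild card at the start (not always allowed)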

When deciding between these I also wanted to keep in mind which solutions are relatively easy to code, straightforward to debug, and allow for acceptable performance. Not allowing wild cards at all means that simple string comparison works, but it omits a common feature. Using simple strings and a well known text query setup like regular expressions would give huge amounts of flexibility in selecting the messages you’d like, but it comes with a large performance cost and extra library dependencies, and is much harder to spot-debug. I decided that since I’m not going to use a common query language, I’d want to make the names as simple as possible to parse.

A Rope is a data structure I’d heard about a few times; I knew it had something to do with making string operations faster, but I was hazy on the details. Since this is a project without deadlines, I set about reading a paper on them to learn more. Shortly into the paper it was clear this wasn’t the solution for me: ropes are designed for larger bodies of text, for manipulating that text in a variety of ways, and for making common editor features easier to implement. But it did share that part of the problem was breaking down the text, and part of it was comparing text.

Since channel names are often a hierarchy that uses the delimiter to separate the layers, I split the name on that delimiter into an array. But that left me with a bunch of smaller strings I had to compare, which seemed much slower than just walking through the name and query once, using the wild cards to skip characters. To avoid a char-by-char parse on every channel name comparison, I drew on the common native-layer practice of hashing strings and then comparing the hashes. Hashing has an up-front cost of processing the string into a numeric form, but then makes comparisons extremely fast. Since a message’s channel would be used to query for appropriate subscribers, it could be hashed once and compared many times. An unfortunate side effect of hashing the substrings, though, is that I wouldn’t be able to allow partial segment matches. The wild card would be all or nothing in a given position: just a.*, no a.b*.

I decided that this side effect of only allowing wild cards at the segment layer was ultimately a good thing. While there may be transition times where partial segment matching would help, it would also let people break the idea that each segment is a complete chunk of data. Similar reasoning is why the only selectors are exact match and wild card: no numeric or alpha-only sorts of selection.

After all this research, thinking, note taking, and general meandering through the computer science fields, I started to implement my solution in Rust. My message broker isn’t actually ready for channel names yet, as I’m still working out how to manage connections and properly do the various forms of store and forward. But this problem tickled me, so I set about solving it anyway. I created a file channel.rs to try building these ideas.

I started with a basic struct with the string form of the name for debugging, and a hashed form of the name for comparisons.

struct Channel {
    raw_name: String,
    hashed_name: Vec<Option<u64>>
}

raw_name is a String so the struct can own the string without any lifetime concerns of a &str; this value will mostly be used for debugging or admin purposes. The hashed_name is a Vec so it can be variable sized; currently I have no limit on the number of delimiters you can use. Option is what I used to handle the wild card. If it was Some(u64) then you have a hash to compare. If it was None then it was a wild card and you don’t have to compare it. After thinking harder though, I realized that I didn’t want the binary Option as my indicator for whether to use a wild card or not. If I added a new type of wild card, for instance one that allowed any number of segments, I’d have to replace my Option usage everywhere. So instead I’ve preemptively changed to using my own ChannelSegment type like so:

struct Channel {
    raw_name: String,
    hashed_name: Vec<ChannelSegment>
}

enum ChannelSegment {
    Wild,
    Hash(u64)
}

Next, I set about parsing the input. I knew I’d be hard coding strings in tests, so I wanted a from_str variant. People will often have hard coded channel names, but there will also be generated ones, and for those allowing a from_string is nice. I also knew I was going to turn the input into a String anyway to assign to raw_name. So I did the following to enable both:

pub fn from_str(input: &str) -> Result<Channel, String> {
    Channel::from_string(input.to_owned())
}

pub fn from_string(input: String) -> Result<Channel, String> {
    // parsing code goes here
}

Parsing was a pretty simple matter: use String::split(char) to get an iterator returning each segment, then rely on Rust’s pattern matching for the cases I specifically cared about, like the empty string (which I made into an error to prevent mistakes) and "*", the wild card character. Build up the hashed name, then return it.

let mut hashed_name = Vec::new();
for chunk in input.split('.') {
    match chunk {
        "" => {
            return Err("empty entry is invalid".to_owned());
        }
        "*" => {
            hashed_name.push(ChannelSegment::Wild);
        }
        _ => {
            hashed_name.push(ChannelSegment::Hash(calculate_hash(&chunk)));
        }
    }
}
Ok(Channel{
    raw_name: input,
    hashed_name: hashed_name
})

One could easily see using an LRU cache (a Least Recently Used cache, which removes the oldest entries when it gets too full) to skip parsing the most commonly used channel names, but I’m not doing that until it proves to be a part that is slowing me down.
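
If I do add one, a bounded cache in front of the parser could stay quite small. Here is a rough sketch of the shape it might take (hypothetical code that isn't in the broker, and it evicts in insertion order rather than true least-recently-used order, which is enough to show the idea):

use std::collections::{HashMap, VecDeque};

// Hypothetical bounded cache in front of Channel::from_str.
// Evicts the oldest inserted entry when full (FIFO, not true LRU).
struct ChannelCache {
    capacity: usize,
    map: HashMap<String, Channel>,
    order: VecDeque<String>,
}

impl ChannelCache {
    fn new(capacity: usize) -> ChannelCache {
        ChannelCache { capacity, map: HashMap::new(), order: VecDeque::new() }
    }

    fn get_or_parse(&mut self, name: &str) -> Result<&Channel, String> {
        if !self.map.contains_key(name) {
            let channel = Channel::from_str(name)?;
            if self.order.len() >= self.capacity {
                // Make room by dropping the oldest cached name.
                if let Some(oldest) = self.order.pop_front() {
                    self.map.remove(&oldest);
                }
            }
            self.order.push_back(name.to_owned());
            self.map.insert(name.to_owned(), channel);
        }
        Ok(self.map.get(name).expect("present: just checked or inserted"))
    }
}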

To compare two Channels I added a .matches(&Channel) method. I decided against implementing PartialEq since I wouldn’t be testing for an exact match, but rather for a match that considers wild cards, and most developers expect a more exact match when using ==.

pub fn matches(&self, other: &Channel) -> bool {
    if self.hashed_name.len() != other.hashed_name.len() {
        return false;
    }
    for (a, b) in self.hashed_name.iter().zip(&other.hashed_name) {
        if let (&ChannelSegment::Hash(inner_a), &ChannelSegment::Hash(inner_b)) = (a, b) {
            if inner_a != inner_b {
                return false;
            }
        }
    }
    
    true
}

Since my only wild card is for a single whole segment, I know I can immediately return if they have different lengths. Then I zip the two hashed_names together so I can iterate through them at the same time. In other languages one would commonly create an integer, increment it, and use it to index into both sequences at the same time, taking care not to walk past the end ourselves. In Rust we rely on iterators to give us fast access to our sequences with minimal checking, keeping us safe against others (or ourselves) mutating the sequences in dangerous ways while iterating. Using zip we can create an iterator that walks both sequences at the same time, keeping us in our land of safety and speed.

As a side note, the calculate_hash function is one I pulled from the docs but changed a little bit to fit my style better:

// These imports are needed for the hasher below.
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

fn calculate_hash<T: Hash>(t: &T) -> u64 {
    let mut hasher = DefaultHasher::new();
    t.hash(&mut hasher);
    hasher.finish()
}

A feature of Rust I really enjoyed while developing this was the ability to write tests in the same file as I went. I could quickly write a few examples and then make the code pass, focusing on the higher level parts of the API rather than testing the internals. Think more “this errors, this does not error” and less “this returns a vector of integers that are ascending…” while adding these tests, so you can quickly change out the implementation while the idea keeps working. Here are the tests I wrote as I was developing:

#[test]
fn create_basic_channel() {
    Channel::from_str("a.b.c").unwrap();
    Channel::from_str("name").unwrap();
}

#[test]
fn create_with_wildcard() {
    Channel::from_str("a.*.b").unwrap();
    Channel::from_str("*").unwrap();
    Channel::from_str("*.end").unwrap();
    Channel::from_str("start.*").unwrap();
}

#[test]
fn create_invalid() {
    assert!(Channel::from_str(".a.b").is_err());
    assert!(Channel::from_str("c.b.").is_err());
    assert!(Channel::from_str("g.l..b").is_err());
    assert!(Channel::from_str("").is_err());
}

#[test]
fn matches_with_self_exact() {
    let channel = Channel::from_str("a.b.c").unwrap();
    assert!(channel.matches(&channel));
    let channel = Channel::from_str("a").unwrap();
    assert!(channel.matches(&channel));
    let channel = Channel::from_str("dabbling.b.c").unwrap();
    assert!(channel.matches(&channel));
    let channel = Channel::from_str("abba.bobble").unwrap();
    assert!(channel.matches(&channel));
}

#[test]
fn not_matches_exact() {
    let channel_full = Channel::from_str("s.t.r").unwrap();
    let channel_sub = Channel::from_str("s.t").unwrap();
    assert!(!channel_full.matches(&channel_sub));
    assert!(!channel_sub.matches(&channel_full));
}

#[test]
fn matches_with_self_wild() {
    let channel = Channel::from_str("a.*.c").unwrap();
    assert!(channel.matches(&channel));
    let channel = Channel::from_str("*").unwrap();
    assert!(channel.matches(&channel));
    let channel = Channel::from_str("*.b.c").unwrap();
    assert!(channel.matches(&channel));
    let channel = Channel::from_str("abba.*").unwrap();
    assert!(channel.matches(&channel));
}

#[test]
fn matches_wild_card_one_side() {
    let wild_channel = Channel::from_str("alpha.beta.*").unwrap();
    let tame_channel = Channel::from_str("alpha.beta.charlie").unwrap();
    assert!(wild_channel.matches(&tame_channel));
    assert!(tame_channel.matches(&wild_channel));
}

Mostly mundane stuff: values all quickly typed in to ensure the general concept works, and common edge cases in parsing (start, end, middle) are covered. But note this isn’t combinatorial testing; there’s no unicode or other trouble points here. Those sorts of tests can come with time. Being able to add a new test quickly when I noticed an edge case is easily one of my favorite features of Rust.

I’ll be open sourcing my work in progress soon, but for now it mostly lives in my notebook as some scribbles, and a smattering of mostly disconnected code on my laptop. If you have feedback on this blog post, how I’ve setup channels in my message broker, or generally about my rust code I’d love to hear it in comments below. Before the comments roll in though, I know using Strings for errors is not great, I just don’t have an error system setup in my project yet.


Hacks.Mozilla.OrgNetwork Monitor Reloaded (Part 1)

The Network Monitor tool has been available in Firefox since the earliest days of Firefox Dev Tools. It’s an invaluable tool for anyone who cares about page load performance and fast modern web pages. This tool went through extensive refactoring recently (under the project codename Netmonitor.html), and this post is intended as an explanation of how we designed the new architecture and what cool new technologies we used.

See the Network Monitor running inside the Firefox Developer Toolbox:

Goals

One of the main goals of the refactoring was to rebuild the entire tool on top of standard web technologies. We removed all Firefox-specific legacy code like XUL (XML User Interface Language) but also the code that used Firefox-specific APIs. This is a great step forward since using web standards now allows you to run the entire tool’s code base in two different environments:

  • The Developer Toolbox
  • Any web page

The first case is well known to anyone who’s familiar with Firefox Developer Tools (see also the screenshot above). The Developer Toolbox can easily be opened at the bottom of a browser window with various tools, Network Monitor included, at your fingertips.

The second use case is new. Now the tool can be loaded within a browser tab just like any other standard web application. See how it looks in the next screenshot:

Note that the page is loaded from localhost:8000. This is where the development server is running.

The ability to run the tool as a web app is a big deal! Now we can use all in-browser tools for the development workflow. Although it was possible to use DevTools to debug DevTools before (with the Browser Toolbox), it is now so much easier and more convenient to simply use the in-browser tools. And of course, we can also load the tool in other browsers. The development is also simpler since we don’t have to build Firefox. Instead, a simple tab-refresh is enough to get Network Monitor reloaded and test your code changes.

Architecture

We’ve build the new Network Monitor front-end on top of the following technologies:

Firefox Developer Tools need complex UI features and we are using the popular React & Redux combo for all of our tools to build a clean and consistent code base. The Network Monitor is no exception. We’ve implemented a set of React components that are responsible for rendering the view (UI), a store with all data collected by HTTP interception and finally a set of actions the user might want to execute.

We’ve also changed the way we write tests. Instead of using the Firefox specific test harness we are slowly shifting towards well known libraries like Mocha and Enzyme. This way it is easier to understand our code base and also contribute to it.

We are using Webpack to build a bundle when running inside a web page. The bundle is consequently served through localhost:8000.

The general architecture is based on the data flow introduced by the React & Redux model.

  • The root component representing the NetMonitorApp can be rendered within Developer Toolbox or a web page.
  • Actions are responsible for things like filtering, clearing the list of requests, sorting and opening a side panel with detailed information.
  • All of our data, including everything collected about HTTP traffic, is stored within a store object.

New Features

We’ve been focused mostly on codebase refactoring, but there were some new features/UI improvements implemented along the way as well. Let’s see some of them.

Column Picker

There are new columns with additional information about individual requests and the user can use the context menu to select those that are important.

Summary Data

We’ve implemented a better summary for currently displayed requests in the list. It’s now located at the bottom of the panel.

  • Number of requests in the list
  • Size/transferred size of all requests
  • Total time needed to load all requests
  • Time when DomContentLoaded event occurred
  • Time when load event occurred

Filtering By Properties

The existing Filter UI is now a lot more powerful. It’s possible to filter the list of requests according to various properties. For example, you can type: larger-than:50 into the Filter input box to see only those requests that are larger than 50 bytes.

Read more about filtering by properties on MDN.

Learn More in MDN

There are links in many places in the UI pointing to MDN for more information. For example, you can quickly learn how various HTTP headers are used.

Conclusion

We believe that building the new generation of Firefox Developer Tools on top of web standards is the right way to go since it means the tools can run in different environments and integrate more effectively with other projects (e.g., IDEs). Building on web standards makes many things possible: Now we can also think about shipping our tools as an online web service that can benefit from the internet platform. We can share collected data as well as debugging context across the web, opening doors to a real social debugging world.

The Netmonitor.html team has done a tremendous amount of work on the refactoring. Big thanks to the core team:

  • Ricky Chien
  • Fred Lin

But there have been many external contributors as well:

  • Jaroslav Snajdr
  • Leonardo Couto
  • Tim Nguyen
  • Deepjyoti Mondal
  • Locke Chen
  • Michael Brennan
  • Ruturaj Vartak
  • Vangelis Katsikaros
  • Adrien Enault
  • And many more…

Let us know what you think. You can join us on the devtools-html Slack.

Jan ‘Honza’ Odvarko

Read next: Hacking on the Network Monitor

Hacks.Mozilla.OrgHacking on the Network Monitor Developer Tool (Part 2)

In the previous post, Network Monitor Reloaded, we walked through the reasoning for refactoring the Network Monitor tool. We also learned that using web standards for building Dev Tools enables us to run them in different environments – loaded either within the Firefox Developer Toolbox or within a browser tab as a standard web application.

In this companion article, we’ll show you how to try these things and see the Network Monitor in action.

Get to the Source

The Firefox Developer Tools code base is currently part of the Firefox source repository, and so downloading it requires downloading the entire repo. There are several ways to get the source code and work on it. You might want to start with our Github docs for detailed instructions.

One option is to use Mercurial and clone the mozilla-central repository to get a local copy.

# This may take a while...
hg clone https://hg.mozilla.org/mozilla-central
cd mozilla-central

Part of our strategy to use web standards to build tools for the web also involves moving our code base from Mercurial to Git (on github.com). So, ultimately, the way to get the source code will change permanently, and it will be easier and faster to clone and work with.

Run Developer Toolbox

For now, if you want to build the Network Monitor and run it inside the Firefox Developer Toolbox, follow these detailed instructions.

Essentially, all you need to do is use the mach command.

cd mozilla-central
./mach build

After the build is complete, start the compiled binary and open the Developer Toolbox (Tools -> Web Developer -> Toggle Tools).

You can rebuild quickly after making changes to the source code as follows:

./mach build faster

Run Development Server

In order to run Net Monitor inside a web page (experimental) you’ll need to install the following packages:

We’ve developed a simple container that allows running Firefox Dev Tools (not only the Network Monitor) inside a web page. This is called Launchpad. The Launchpad is responsible for making a connection to the instance of Firefox being debugged and loading our Network Monitor tool.

The following diagram depicts the entire concept:

  • The Net Monitor tool (client) is running inside a Browser tab just like any other standard web application.
  • The app is served by the development server (server) through localhost:8000
  • The Net Monitor tool (client) is connecting to the target (debugged) Firefox instance through a WebSocket.
  • The target Firefox instance needs to listen on port 6080 to allow the WebSocket connection to be created.
  • The development server is started using yarn start

Let’s take a closer look at how to set up the development environment.

First we need to install dependencies for our development server:

cd mozilla-central
cd devtools/client/netmonitor
yarn install

Now we can run it:

yarn start

If all is ok, you should see the following message:

Development Server Listening at http://localhost:8000

Next, we need to listen for incoming connections in the target Firefox browser we want to debug. Open the Developer Toolbar (Tools -> Web Developer -> Developer Toolbar) and type the following command into it. This will start listening so tools can connect to this browser.

listen 6080

The Developer Toolbar UI should be opened at the bottom of the browser window.

Finally, you can load localhost:8000.

You should see the Launchpad user interface now. It lists the opened browser tabs in the target Firefox browser. You should also see that one of these tabs is the Launchpad itself (the last net monitor tab running from localhost:8000).

All you need to do is to click one of the tabs you want to debug. As soon as the Launchpad and Network monitor tools connect to the selected browser tab, you can reload the connected tab and see a list of HTTP requests.

If you change the underlying source code and refresh the page you’ll see your changes immediately.

Check out the following screencast for a detailed walk-through of running the Network monitor tool on top of the Launchpad and utilizing the hot-reload feature to see code changes instantly.

You might also want to read mozilla-central/devtools/client/netmonitor/README.md for more detailed info about how to build and run the Network Monitor tool.

Future Plans

We believe that building tools for the web using standard web technologies is the right way to go! Our tools are for web developers. We’d like you to be able to work with our tools using the same skills and knowledge that you already apply when developing web apps and services.

We are planning many more powerful features for Firefox Dev Tools, and we believe that the future holds a lot of exciting things. Here’s a teaser for what’s ahead on the roadmap.

  • Connecting to Chrome
  • Connecting to NodeJS
  • Integration with existing IDEs

Stay tuned!

Jan ‘Honza’ Odvarko

Air MozillaReps Weekly Meeting Jun. 15, 2017

Reps Weekly Meeting Jun. 15, 2017 This is a weekly call with some of the Reps to discuss all matters about/affecting Reps and invite Reps to share their work with everyone.

Ryan HarterBad Tools are Insidious

This is my first job making data tools that other people use. In the past, I've always been a data scientist - a consumer of these tools. I'm learning a lot.

Last quarter, I learned that bad tools are often hard to spot even when they're damaging productivity. I sum this up by saying that bad tools are insidious. This may be obvious to you but I'm excited by the insight.

Bad tools are hard to spot

I spent some time working directly with analysts building ETL jobs. I found some big usability gaps with our tools and I was surprised I wasn't hearing about these problems from our analysts.

I looked back to previous jobs where I was on the other side of this equation. I remember being totally engrossed in a problem and excited to find a solution. All I wanted were tools good enough to get the job done. I didn't care to reflect on how I could make the process smoother. I wanted to explore and iterate.

When I dug into analyses this quarter, I had a different perspective. I was working with the intention of improving our tools and the analysis was secondary. It was much easier to find workflow improvements this way.

In The Design of Everyday Things, Don Norman notes that users tend to blame themselves when they have difficulty with tools. That's probably part of the issue here as well.

Bad tools hurt

If our users aren't complaining, is it really a problem that needs to get fixed? I think so. We all understand that bad tools hurt our productivity. However, I think we tend to underestimate the value of good tools when we do our mental accounting.

Say I'm working on a new ETL job that takes ~5 minutes to test by hand but ~1 minute to test programmatically. By default, I'd value implementing good tests at 4 minutes per test run.

This is a huge underestimate! Testing by hand introduces a context shift, another chance to get distracted, and another chance to fall out of flow. I'll bet a 5 minute distraction can easily end up costing me 20 minutes of productivity on a good day.
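
To put rough numbers on it (made up, but in the spirit of the post): over 30 test runs, the naive accounting says good tests save 30 × (5 − 1) = 120 minutes. If each manual run instead costs 20 minutes of real productivity, the saving is closer to 30 × (20 − 1) = 570 minutes, nearly five times the naive estimate.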

Your tools should be a joy to use. The better they work, the easier it is to stay in flow, be creative, and stay excited.

In Summary

Don't expect your users to tell you how to improve your tools. You're probably going to need to eat your own dogfood.

Daniel Stenbergtarget-independent libcurl headers

We write libcurl to be very portable. It can be built and run on virtually every operating system with a CPU architecture that is at least 32 bits, from some of the most legacy Unixes of the early 90s to the most recent updates of all the popular systems, including the widespread mobile platforms.

Type sizes on different archs

In the early 2000s we added support to libcurl for “large files” (back in the days when that support wasn’t always present in your operating system) and large variable types (beyond 32 bits), to work for applications and libcurl alike, and to work the same way for libcurl-using applications independently of which platform you’d compile the code on.

We started out using compiler/system defines to figure out for example the size of the native “off_t” type to know if it was 32 bit or 64 bit. That turned out to be problematic as users accidentally ended up in situations where the library considered a type to be one size and the application considered it to be another, leading to unexpected behaviors at best or downright crashes and misery.

Determine at lib build-time

The fix to that run-time size-of-variables confusion was to generate a fixed “outcome” at build-time that would then be used by both the library and applications so that they could never again disagree on this. The obvious downside here was that we had to generate this target specific information into public headers for the library (known as curl/curlbuild.h). We didn’t like doing it this way, but this approach was a better situation than before as it caused less headaches for users.

Now we instead created problems for system packagers who wanted to provide a set of curl headers and allow users to build for example either a 32 bit build or a 64 bit build of their application – so they had to generate two sets of curl headers. Or having the headers on a shared file system to be used by many different systems. Inconvenient. But as this solution didn’t hurt too many people, was a cumbersome problem to fix and yet possible to work around, it remained in the curl project since August 7, 2008 (commit 14240e9e109fe6af1).

Determine at app build-time

In March 2017 (commit 9506d01ee5) we introduced a new take on this problem. A new header that checks systems defines and determines all the necessary information at the time the application is compiled instead of at the time libcurl is compiled. We call it curl/system.h.

The goal was to replace the generated curlbuild.h header, but since it would cause serious problems if this new header would get any different results (like variable type sizes) than the old header, it was a risky move. We needed extra seat-belts for this.

We therefore added the new header next to the old header, in parallel, and introduced a test case in the curl test suite that verifies the output from the two systems and makes sure that they agree, and had them coexist in the curl source tree. The curl/system.h file was of course not used for anything real, but it was tested by everyone who runs the test suite – to make sure it isn’t awful.

We think the new header file has now proven itself worthy. We have not gotten any recent reports on problems with test 1541. It is time to cut out the old header system and launch the new!

Starting in release curl 7.55.0, due to be released on August 9, 2017, the header files will finally again be truly platform agnostic. It took us nine years but we finally did it! The bulk of the change is made in this commit.

Just another detail in the machinery.

Daniel PocockCroissants, Qatar and a Food Computer Meetup in Zurich

In my last blog, I described the plan to hold a meeting in Zurich about the OpenAg Food Computer.

The Meetup page has been gathering momentum but we are still well within the capacity of the room and catering budget so if you are in Zurich, please join us.

Thanks to our supporters

The meeting now has sponsorship from three organizations, Project 21 at ETH, the Debian Project and Free Software Foundation of Europe.

Sponsorship funds help with travel expenses and refreshments.

Food is always in the news

In my previous blog, I referred to a number of food supply problems that have occurred recently. There have been more in the news this week: a potential croissant shortage in France due to the rising cost of butter and Qatar's efforts to air-lift 4,000 cows from the US and Australia, among other things, due to the Saudi Arabia embargo.

The food computer isn't an immediate solution to these problems but it appears to be a helpful step in the right direction.

Mozilla Addons BlogWebExtensions in Firefox 55

Firefox 55 landed in Beta this week, so it’s time for another update on WebExtensions. Because the development period for this latest release was about twice as long as normal, we have many more updates. Documentation for the APIs discussed here can be found on MDN.

APIs

The webRequest API has seen many improvements. Empty types and URLs in webRequest filters are now rejected. Requests can be cancelled before cookie processing occurs. Websockets can be processed through webRequest using the ws:// and wss:// protocols. Requests from the top frame now have the correct frameId, and more error conditions on requests are picked up by the onErrorOccurred event.

The sidebar API now re-opens automatically if you reload the add-on using about:debugging or web-ext. If you are following along with project Photon, you’ll note that the sidebar works great with the new Photon designs. Shiny!

The runtime.onMessageExternal API has been implemented, which allows WebExtensions add-ons to communicate with other WebExtensions add-ons. The runtime.onInstalled API will now activate if an add-on is installed temporarily, and the event will now include the previousVersion of the extension.

In order to limit the amount of CSS that a developer has to write and provide some degree of uniformity, there is a browser_style option for the browserAction API. We’ve also provided this to options V2 and the sidebar APIs.

Context menus now work in browserAction popups. The onClickData event in the context menu also gets the frameID. Context menu clicks can now open browser actions, page actions and sidebars. To do this, specify _execute_browser_action, _execute_page_action or _execute_sidebar_action in the command field for creating a context menu.

If you load a page from your extension, you get a long moz-extension://…. URL in the URL bar. We’ve added a notification in the identity box to indicate which extension loaded the page.

Other changes include:

A new API is now available for the nsIProfiler. This allows the Gecko Profiler to be used without legacy add-on support. This was essential for the Quantum Flow work happening in Firefox. Because of the sensitive nature of the content and the limited appeal of this API, access to it is currently restricted.

Permissions

With Firefox 55, the user interface for required and optional permissions is now enabled for WebExtensions add-ons. Required permissions and hosts will trigger a prompt on installation for the user. Here’s an example:

When an extension is updated and the hosts or permissions have changed, the current extension remains enabled, but the user has to accept the updated permissions in order to continue.

There is also a new user interface for side loading add-ons that is more consistent with other installation methods. Side loading is when extensions are installed outside of Firefox, by other software. It now appears in the hamburger menu as a notification:

This permissions dialog is slightly different as well:

Once an extension has been installed, if it would like more permissions or hosts, it can ask for those as needed — these are called optional permissions. They are accessible using the browser.permissions.request API. An example of using optional permissions is available in the example repository on github.

Developer tools

With the introduction of devtools.inspectedWindow.eval bindings, many more add-ons are now able to support WebExtensions APIs. The developer tools team at Mozilla has been reaching out to developers with add-ons that might be affected as you can see on Twitter. For example, the Redux DevTools extension is now a WebExtensions add-on using the same code base as other browsers.

An API for devtools.panels.themeName has been implemented. The devtools panel icon is no longer inverted if a light theme is chosen.

There have been some improvements to the about:debugging page:

These changes are aimed at improving the ease of development. Temporary extensions now appear at the top of the page, a remove button is present, help is shown if the extension has a temporary ID, the location of the extension on the file system is shown, and the internal UUID is shown.

Android

Firefox for Android has gained browserAction support. Currently a textual menu item is added to the bottom of the menu on Android. It supports browserAction.onClicked and setTitle and getTitle. Tabs support was added to pageAction.

Theming

The beginnings of theme support, as detailed in this blog post, have landed in Firefox. In Firefox 55 you can use the browser.theme.update API. The theme API allows you to set some key values in Firefox, such as:

browser.theme.update({
  images: {
    headerURL: "header.png",
  },
  colors: {
    accentcolor: "#000",
    textcolor: "#fff",
  }
});

This WebExtensions API will apply the theme to Firefox by setting the header image and some CSS colors. At this point the theme API provides a very similar set of functionality to the existing lightweight themes. However, using this API you can change the theme dynamically from your extension.

Additionally, browser.management APIs have been implemented for themes. These allow you to enable and disable themes by only using the management API. For an example check out the example repository on github.

Proxy

The proxy API allows extension authors to insert proxy configuration files into Firefox. This API implementation is quite different from the one in Chrome to take advantage of some of the improved support in Firefox for proxies. As a result, to prevent confusion, this API is not present in the chrome namespace.

The proxy configuration file will contain a function for dealing with the incoming request:

function FindProxyForURL(url, host) {
 // ...
}

And this will then be registered in the API:

browser.proxy.registerProxyScript(filename).then();

For an example of using the proxy API, please see the repository on github.

Performance

One focus of the Firefox 55 release was the performance of WebExtensions, particularly the scenario where there is at least one WebExtension on startup.

These performance improvements include speeding up host matching, limiting the cloning of add-on messages, and lazily loading APIs when they are needed. We’ve also been adding telemetry measurements into Firefox, such as background page load times and extension start-up times.

The next largest performance gain is the moving of WebExtensions add-ons to their own process. To enable this, we made the debugging tools work seamlessly with out-of-process add-ons. We are hoping to enable this feature for Windows users in Firefox 56 once the remaining graphics issues have been resolved.

You can see some of the results of these performance improvements in the Quantum newsletter which Ehsan posts to his blog. The improvements aren’t just limited to WebExtensions add-ons. For example, the introduction of off-main-thread script decoding brought a large performance improvement to startup measurements for all Firefox users, as well as those with WebExtensions add-ons:

Community

As ever we need to thank the community who contributed to this release. This includes: Tushar Saini, Tomislav Jovanovic, Rob Wu, Martin Giger and Geoff Lankow. Thank you to you all.

The post WebExtensions in Firefox 55 appeared first on Mozilla Add-ons Blog.

Air MozillaThe Joy of Coding - Episode 102

The Joy of Coding - Episode 102 mconley livehacks on real Firefox bugs while thinking aloud.

Mozilla Addons BlogAdd-ons Update – 2017/06

Here’s the monthly update of the state of the add-ons world.

The Road to Firefox 57 explains what developers should look forward to in regards to add-on compatibility for the rest of the year. So please give it a read if you haven’t already.

The Review Queues

In the past month, our team reviewed 2,209 listed add-on submissions:

  • 1202 in fewer than 5 days (54%).
  • 173 between 5 and 10 days (8%).
  • 834 after more than 10 days (38%).

235 listed add-ons are awaiting review.

If you compare these numbers with last month’s, you’ll see a very clear difference, both in reviews done and add-ons still awaiting review. The admin reviewers have been doing an excellent job clearing the queues of add-ons that use the WebExtensions API, which are generally safer and can be reviewed more easily. There’s still work to do so we clear the review backlog, but we’re on track to being in a good place by the end of the month.

However, this doesn’t mean we won’t need volunteer reviewers in the future. If you’re an add-on developer and are looking for contribution opportunities, please consider joining us. Visit our wiki page for more information.

Compatibility Update

We published the blog post for 55 and the bulk validation script will be run in a week or so. The compatibility post for 56 is still a few weeks away.

Make sure you’ve tested your add-ons and either use WebExtensions or set the multiprocess compatible flag in your manifest to ensure they continue working in Firefox. And as always, we recommend that you test your add-ons on Beta.

You may also want to review the post about upcoming changes to the Developer Edition channel. Firefox 55 is the first version that will move directly from Nightly to Beta.

If you’re an add-ons user, you can install the Add-on Compatibility Reporter to identify and report any add-ons that aren’t working anymore.

Recognition

We would like to thank the following people for their recent contributions to the add-ons world:

  • Tushar Saini
  • harikishen
  • Geoff Lankow
  • Trishul Goel
  • Andrew Truong
  • raajitr
  • Christophe Villeneuve
  • zombie
  • Perry Jiang
  • vietngoc

You can read more about their work in our recognition page.

The post Add-ons Update – 2017/06 appeared first on Mozilla Add-ons Blog.

Hacks.Mozilla.OrgA crash course in memory management

This is the 1st article in a 3-part series:

  1. A crash course in memory management
  2. A cartoon intro to ArrayBuffers and SharedArrayBuffers
  3. Avoiding race conditions in SharedArrayBuffers with Atomics

To understand why ArrayBuffer and SharedArrayBuffer were added to JavaScript, you need to understand a bit about memory management.

You can think of memory in a machine as a bunch of boxes. I think of these like the mailboxes that you have in offices, or the cubbies that pre-schoolers have to store their things.

If you need to leave something for one of the other kids, you can put it inside a box.

A column of boxes with a child putting something in one of the boxes

Next to each one of these boxes, you have a number, which is the memory address. That’s how you tell someone where to find the thing you’ve left for them.

Each one of these boxes is the same size and can hold a certain amount of info. The size of the box is specific to the machine. That size is called word size. It’s usually something like 32-bits or 64-bits. But to make it easier to show, I’m going to use a word size of 8 bits.

A box with 8 smaller boxes in it

If we wanted to put the number 2 in one of these boxes, we could do it easily. Numbers are easy to represent in binary.

The number two, converted to binary 00000010 and put inside the boxes

What if we want something that’s not a number though? Like the letter H?

We’d need to have a way to represent it as a number. To do that, we need an encoding, something like UTF-8. And we’d need something to turn it into that number… like an encoder ring. And then we can store it.

The letter H, put through an encoder ring to get 72, which is then converted to binary and put in the boxes

When we want to get it back out of the box, we’d have to put it through a decoder to translate it back to H.
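
In code, that encoder ring and decoder are what the Encoding API gives you. A small illustrative snippet, assuming a browser (or runtime) that supports TextEncoder and TextDecoder:

const encoder = new TextEncoder();         // encodes strings to UTF-8 bytes
const bytes = encoder.encode('H');         // Uint8Array [72]
console.log(bytes[0]);                     // 72, the number that goes in the boxes

const decoder = new TextDecoder('utf-8');  // translates the bytes back
console.log(decoder.decode(bytes));        // "H"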

Automatic memory management

When you’re working in JavaScript you don’t actually need to think about this memory. It’s abstracted away from you. This means you don’t touch the memory directly.

Instead, the JS engine acts as an intermediary. It manages the memory for you.

A column of boxes with a rope in front of it and the JS engine standing at that rope like a bouncer

So let’s say some JS code, like React, wants to create a variable.

Same as above, with React asking the JS engine to create a variable

What the JS engine does is run that value through an encoder to get the binary representation of the value.

The JS engine using an encoder ring to convert the string to binary

And it will find space in the memory that it can put that binary representation into. This process is called allocating memory.

The JS engine finding space for the binary in the column of boxes

Then, the engine will keep track of whether or not this variable is still accessible from anywhere in the program. If the variable can no longer be reached, the memory is going to be reclaimed so that the JS engine can put new values there.

The garbage collector clearing out the memory

This process of watching the variables—strings, objects, and other kinds of values that go in memory—and clearing them out when they can’t be reached anymore is called garbage collection.
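
For example, here is a hedged sketch of a value becoming unreachable (exactly when the engine reclaims the memory is up to its garbage collector):

function greet() {
  const message = { text: 'hello' };  // the engine allocates memory for this object
  console.log(message.text);
}
greet();  // after greet() returns, nothing can reach message anymore,
          // so the garbage collector is free to reclaim that memory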

Languages like JavaScript, where the code doesn’t deal with memory directly, are called memory-managed languages.

This automatic memory management can make things easier for developers. But it also adds some overhead. And that overhead can sometimes make performance unpredictable.

Manual memory management

Languages with manually managed memory are different. For example, let’s look at how React would work with memory if it were written in C (which would be possible now with WebAssembly).

C doesn’t have that layer of abstraction that JavaScript does on the memory. Instead, you’re operating directly on memory. You can load things from memory, and you can store things to memory.

A WebAssembly version of React working with memory directly

When you’re compiling C or other languages down to WebAssembly, the tool that you use will add in some helper code to your WebAssembly. For example, it would add code that handles encoding and decoding bytes. This code is called a runtime environment. The runtime environment will help handle some of the stuff that the JS engine does for JS.

An encoder ring being shipped down as part of the .wasm file

But for a manually managed language, that runtime won’t include garbage collection.

This doesn’t mean that you’re totally on your own. Even in languages with manual memory management, you’ll usually get some help from the language runtime. For example, in C, the runtime will keep track of which memory addresses are open in something called a free list.

A free list next to the column of boxes, listing which boxes are free right now

You can use the function malloc (short for memory allocate) to ask the runtime to find some memory addresses that can fit your data. This will take those addresses off of the free list. When you’re done with that data, you have to call free to deallocate the memory. Then those addresses will be added back to the free list.

You have to figure out when to call those functions. That’s why it’s called manual memory management—you manage the memory yourself.
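
The same bookkeeping can be sketched in JavaScript as a toy model (every name here is hypothetical, and real allocators are far more sophisticated):

const WORDS = 8;
const memory = new Uint8Array(WORDS);                         // the boxes
const freeList = Array.from({ length: WORDS }, (_, i) => i);  // every address starts free

function malloc() {
  if (freeList.length === 0) throw new Error('out of memory');
  return freeList.pop();     // hand out an address, taking it off the free list
}

function free(address) {
  freeList.push(address);    // put the address back on the free list
}

const addr = malloc();       // ask for a slot
memory[addr] = 72;           // use it
free(addr);                  // give it back when done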

As a developer, figuring out when to clear out different parts of memory can be hard. If you do it at the wrong time, it can cause bugs and even lead to security holes. If you don’t do it, you run out of memory.

This is why many modern languages use automatic memory management—to avoid human error. But that comes at the cost of performance. I’ll explain more about this in the next article.

Hacks.Mozilla.OrgA cartoon intro to ArrayBuffers and SharedArrayBuffers

This is the 2nd article in a 3-part series:

  1. A crash course in memory management
  2. A cartoon intro to ArrayBuffers and SharedArrayBuffers
  3. Avoiding race conditions in SharedArrayBuffers with Atomics

In the last article, I explained how memory-managed languages like JavaScript work with memory. I also explained how manual memory management works in languages like C.

Why is this important when we’re talking about ArrayBuffers and SharedArrayBuffers?

It’s because ArrayBuffers give you a way to handle some of your data manually, even though you’re working in JavaScript, which has automatic memory management.

Why is this something that you would want to do?

As we talked about in the last article, there’s a trade-off with automatic memory management. It is easier for the developer, but it adds some overhead. In some cases, this overhead can lead to performance problems.

A balancing scale showing that automatic memory management is easier to understand, but harder to make fast

For example, when you create a variable in JS, the engine has to guess what kind of variable this is and how it should be represented in memory. Because it’s guessing, the JS engine will usually reserve more space than it really needs for a variable. Depending on the variable, the memory slot may be 2–8 times larger than it needs to be, which can lead to lots of wasted memory.

Additionally, certain patterns of creating and using JS objects can make it harder to collect garbage. If you’re doing manual memory management, you can choose an allocation and de-allocation strategy that’s right for the use case that you’re working on.

Most of the time, this isn’t worth the trouble. Most use cases aren’t so performance sensitive that you need to worry about manual memory management. And for common use cases, manual memory management may even be slower.

But for those times when you need to work at a low-level to make your code as fast as possible, ArrayBuffers and SharedArrayBuffers give you an option.

A balancing scale showing that manual memory management gives you more control for performance fine-tuning, but requires more thought and planning

So how does an ArrayBuffer work?

It’s basically like working with any other JavaScript array. Except, when using an ArrayBuffer, you can’t put any JavaScript types into it, like objects or strings. The only thing that you can put into it are bytes (which you can represent using numbers).

Two arrays, a normal array which can contain numbers, objects, strings, etc, and an ArrayBuffer, which can only contain bytes

One thing I should make clear here is that you aren’t actually adding this byte directly to the ArrayBuffer. By itself, this ArrayBuffer doesn’t know how big the byte should be, or how different kinds of numbers should be converted to bytes.

The ArrayBuffer itself is just a bunch of zeros and ones all in a line. The ArrayBuffer doesn’t know where the division should be between the first element and the second element in this array.

A bunch of ones and zeros in a line

To provide context, and to actually break this up into boxes, we need to wrap it in what’s called a view. These views on the data are added with typed arrays, and there are lots of different kinds of typed arrays to work with.

For example, you could have an Int8 typed array which would break this up into 8-bit bytes.

Those ones and zeros broken up into boxes of 8

Or you could have a Uint16 array, which would break it up into 16-bit chunks, and handle each one as if it were an unsigned integer.

Those ones and zeros broken up into boxes of 16

You can even have multiple views on the same base buffer. Different views will give you different results for the same operations.

For example, if we get elements 0 & 1 from the Int8 view on this ArrayBuffer, it will give us different values than element 0 in the Uint16 view, even though they contain exactly the same bits.

The same ones and zeros, viewed as Int8 elements and as Uint16 elements, giving different values

In this way, the ArrayBuffer basically acts like raw memory. It emulates the kind of direct memory access that you would have in a language like C.
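
For example, here is a short snippet with two views over the same four bytes (the exact Uint16 values depend on the machine’s byte order):

const buffer = new ArrayBuffer(4);           // 4 bytes of raw zeros and ones
const int8View = new Int8Array(buffer);      // view them as 4 signed 8-bit numbers
const uint16View = new Uint16Array(buffer);  // view them as 2 unsigned 16-bit numbers

int8View[0] = 1;                             // byte 0 is now 00000001
int8View[1] = 1;                             // byte 1 is now 00000001

console.log(int8View[0], int8View[1]);       // 1 1
console.log(uint16View[0]);                  // 257 (0x0101) on little-endian machines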

You may be wondering why we don’t just give programmers direct access to memory instead of adding this layer of abstraction. Giving direct access to memory would open up some security holes. I will explain more about this in a future article.

So, what is a SharedArrayBuffer?

To explain SharedArrayBuffers, I need to explain a little bit about running code in parallel and JavaScript.

You would run code in parallel to make your code run faster, or to make it respond faster to user events. To do this, you need to split up the work.

In a typical app, the work is all taken care of by a single individual—the main thread. I’ve talked about this before… the main thread is like a full-stack developer. It’s in charge of JavaScript, the DOM, and layout.

Anything you can do to remove work from the main thread’s workload helps. And under certain circumstances, ArrayBuffers can reduce the amount of work that the main thread has to do.

The main thread standing at its desk with a pile of paperwork. The top part of that pile has been removed

But there are times when reducing the main thread’s workload isn’t enough. Sometimes you need to bring in reinforcements… you need to split up the work.

In most programming languages, the way you usually split up the work is by using something called a thread. This is basically like having multiple people working on a project. If you have tasks that are pretty independent of each other, you can give them to different threads. Then, both those threads can be working on their separate tasks at the same time.

In JavaScript, the way you do this is using something called a web worker. These web workers are slightly different than the threads you use in other languages. By default they don’t share memory.

Two threads at desks next to each other. Their piles of paperwork are half as tall as before. There is a chunk of memory below each, but not connected to the other's memory

This means if you want to share some data with the other thread, you have to copy it over. This is done with the function postMessage.

postMessage takes whatever object you put into it, serializes it, sends it over to the other web worker, where it’s deserialized and put in memory.

Thread 1 shares memory with thread 2 by serializing it, sending it across, where it is copied into thread 2's memory

That’s a pretty slow process.

For some kinds of data, like ArrayBuffers, you can do what is called transferring memory. That means moving that specific block of memory over so that the other web worker has access to it.

But then the first web worker doesn’t have access to it anymore.

Thread 1 shares memory with thread 2 by transferring it. Thread 1 no longer has access to it
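
A hedged sketch of copying versus transferring (worker.js is a hypothetical file name):

const worker = new Worker('worker.js');

const copied = new ArrayBuffer(1024);
worker.postMessage(copied);          // structured clone: the worker gets its own copy

const moved = new ArrayBuffer(1024);
worker.postMessage(moved, [moved]);  // transfer list: the block of memory is moved
console.log(moved.byteLength);       // 0, this thread can no longer use it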

That works for some use cases, but for many use cases where you want to have this kind of high performance parallelism, what you really need is to have shared memory.

This is what SharedArrayBuffers give you.

The two threads get some shared memory which they can both access

With the SharedArrayBuffer, both web workers, both threads, can be writing data and reading data from the same chunk of memory.

This means they don’t have the communication overhead and delays that you would have with postMessage. Both web workers have immediate access to the data.
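
Sketched in the same hedged style as above:

const worker = new Worker('worker.js');   // hypothetical file name, as before
const sab = new SharedArrayBuffer(1024);
const view = new Int32Array(sab);

worker.postMessage(sab);  // the worker receives a handle to the same memory, no copy

view[0] = 42;             // the worker can observe this write directly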

There is some danger in having this immediate access from both threads at the same time though. It can cause what are called race conditions.

Drawing of two threads racing towards memory

I’ll explain more about those in the next article.

What’s the current status of SharedArrayBuffers?

SharedArrayBuffers will be in all of the major browsers soon.

Logos of the major browsers high-fiving

They’ve already shipped in Safari (in Safari 10.1). Both Firefox and Chrome will be shipping them in their July/August releases. And Edge plans to ship them in their fall Windows update.

Even once they are available in all major browsers, we don’t expect application developers to be using them directly. In fact, we recommend against it. You should be using the highest level of abstraction available to you.

What we do expect is that JavaScript library developers will create libraries that give you easier and safer ways to work with SharedArrayBuffers.

In addition, once SharedArrayBuffers are built into the platform, WebAssembly can use them to implement support for threads. Once that’s in place, you’d be able to use the concurrency abstractions of a language like Rust, which has fearless concurrency as one of its main goals.

In the next article, we’ll look at the tools (Atomics) that these library authors would use to build up these abstractions while avoiding race conditions.

Layer diagram showing SharedArrayBuffer + Atomics as the foundation, and JS libraries and WebAssembly threading building on top

Hacks.Mozilla.OrgAvoiding race conditions in SharedArrayBuffers with Atomics

This is the 3rd article in a 3-part series:

  1. A crash course in memory management
  2. A cartoon intro to ArrayBuffers and SharedArrayBuffers
  3. Avoiding race conditions in SharedArrayBuffers with Atomics

In the last article, I talked about how using SharedArrayBuffers could result in race conditions. This makes working with SharedArrayBuffers hard. We don’t expect application developers to use SharedArrayBuffers directly.

But library developers who have experience with multithreaded programming in other languages can use these new low-level APIs to create higher-level tools. Then application developers can use these tools without touching SharedArrayBuffers or Atomics directly.

Layer diagram showing SharedArrayBuffer + Atomics as the foundation, and JS libraries and WebAssembly threading building on top

Even though you probably shouldn’t work with SharedArrayBuffers and Atomics directly, I think it’s still interesting to understand how they work. So in this article, I’ll explain what kinds of race conditions concurrency can bring, and how Atomics help libraries avoid them.

But first, what is a race condition?

Drawing of two threads racing towards memory

 

Race conditions: an example you may have seen before

A pretty straightforward example of a race condition can happen when you have a variable that is shared between two threads. Let’s say one thread wants to load a file and the other thread checks whether it exists. They share a variable, fileExists, to communicate.

Initially, fileExists is set to false.

Two threads working on some code. Thread 1 is loading a file if fileExists is true, and thread 2 is setting fileExists

As long as the code in thread 2 runs first, the file will be loaded.

Diagram showing thread 2 going first and file load succeeding

But if the code in thread 1 runs first, then it will log an error to the user, saying that the file does not exist.

Diagram showing thread 1 going first and file load failing

But that’s not the problem. It’s not that the file doesn’t exist. The real problem is the race condition.

Many JavaScript developers have run into this kind of race condition, even in single-threaded code. You don’t have to understand anything about multithreading to see why this is a race.
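
Here is a hedged sketch of that race in single-threaded async code (all names are illustrative):

let fileExists = false;

function loadFile() {        // "thread 1"
  if (fileExists) {
    console.log('loading file');
  } else {
    console.error('error: file does not exist');
  }
}

function createFile() {      // "thread 2"
  fileExists = true;
}

// Whichever callback the event loop happens to run first decides the outcome.
setTimeout(loadFile, Math.random() * 10);
setTimeout(createFile, Math.random() * 10);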

However, there are some kinds of race conditions which aren’t possible in single-threaded code, but that can happen when you’re programming with multiple threads and those threads share memory.

Different classes of race conditions and how Atomics help

Let’s explore some of the different kinds of race conditions you can have in multithreaded code and how Atomics help prevent them. This doesn’t cover all possible race conditions, but should give you some idea why the API provides the methods that it does.

Before we start, I want to say again: you shouldn’t use Atomics directly. Writing multithreaded code is a known hard problem. Instead, you should use reliable libraries to work with shared memory in your multithreaded code.

Caution sign

With that out of the way…

Race conditions in a single operation

Let’s say you had two threads that were incrementing the same variable. You might think that the end result would be the same regardless of which thread goes first.

Diagram showing two threads incrementing a variable in turn

But even though, in the source code, incrementing a variable looks like a single operation, when you look at the compiled code, it is not a single operation.

At the CPU level, incrementing a value takes three instructions. That’s because the computer has both long-term memory and short-term memory. (I talk more about how this all works in another article).

Drawing of a CPU and RAM

All of the threads share the long-term memory. But the short-term memory—the registers—are not shared between threads.

Each thread needs to pull the value from memory into its short-term memory. After that, it can run the calculation on that value in short-term memory. Then it writes that value back from its short-term memory to the long-term memory.

Diagram showing a variable being loaded from memory to a register, then being operated on, and then being stored back to memory

If all of the operations in thread 1 happen first, and then all the operations in thread 2 happen, we will end up with the result that we want.

Flow chart showing instructions happening sequentially on one thread, then the other

But if they are interleaved in time, the value that thread 2 has pulled into its register gets out of sync with the value in memory. This means that thread 2 doesn’t take thread 1’s calculation into consideration. Instead, it just clobbers the value that thread 1 wrote to memory with its own value.
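
Spelled out as a hedged sketch, one thread’s increment really behaves like three separate steps:

const sharedView = new Int32Array(new SharedArrayBuffer(4));  // illustrative setup
const i = 0;

const loaded = sharedView[i];  // 1. pull the value from memory into short-term memory
const sum = loaded + 1;        // 2. run the calculation on it
sharedView[i] = sum;           // 3. write the result back to long-term memory
// If the other thread runs its own three steps between ours,
// its write clobbers our update and one increment is lost.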

Flow chart showing instructions interleaved between threads

One thing atomic operations do is take these operations that humans think of as being single operations, but which the computer sees as multiple operations, and make the computer see them as single operations, too.

This is why they’re called atomic operations. They take an operation that would normally have multiple instructions, where the instructions could be paused and resumed, and make it so that they all happen seemingly instantaneously, as if it were one instruction. It’s like an indivisible atom.

Instructions encased in an atom

Using atomic operations, the code for incrementing would look a little different.

Atomics.add(sabView, index, 1)

Now that we’re using Atomics.add, the different steps involved in incrementing the variable won’t be mixed up between threads. Instead, one thread will finish its atomic operation and prevent the other one from starting. Then the other will start its own atomic operation.

Flow chart showing atomic execution of the instructions

The Atomics methods that help avoid this kind of race are:

  • Atomics.add
  • Atomics.sub
  • Atomics.and
  • Atomics.or
  • Atomics.xor
  • Atomics.exchange

You’ll notice that this list is fairly limited. It doesn’t even include things like division and multiplication. A library developer could create atomic-like operations for other things, though.

To do that, the developer would use Atomics.compareExchange. With this, you get a value from the SharedArrayBuffer, perform an operation on it, and only write it back to the SharedArrayBuffer if no other thread has updated it since you first checked. If another thread has updated it, then you can get that new value and try again.
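
For example, here is a hedged sketch of an atomic-style multiply built on Atomics.compareExchange (view is assumed to be an Int32Array on a SharedArrayBuffer, and the function name is made up):

function atomicMultiply(view, index, factor) {
  let oldValue;
  let newValue;
  do {
    oldValue = Atomics.load(view, index);
    newValue = oldValue * factor;
    // compareExchange only writes newValue if the slot still holds oldValue,
    // and it returns the value it found, so a mismatch means another thread
    // got there first and we should try again.
  } while (Atomics.compareExchange(view, index, oldValue, newValue) !== oldValue);
  return newValue;
}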

Race conditions across multiple operations

So those Atomic operations help avoid race conditions during “single operations”. But sometimes you want to change multiple values on an object (using multiple operations) and make sure no one else is making changes to that object at the same time. Basically, this means that during every pass of changes to an object, that object is on lockdown and inaccessible to other threads.

The Atomics object doesn’t provide any tools to handle this directly. But it does provide tools that library authors can use to handle this. What library authors can create is a lock.

Diagram showing two threads and a lock

If code wants to use locked data, it has to acquire the lock for the data. Then it can use the lock to lock out the other threads. Only it will be able to access or update the data while the lock is active.

To build a lock, library authors would use Atomics.wait and Atomics.wake, plus other ones such as Atomics.compareExchange and Atomics.store. If you want to see how these would work, take a look at this basic lock implementation.

In this case, thread 2 would acquire the lock for the data and set the value of locked to true. This means thread 1 can’t access the data until thread 2 unlocks.

Thread 2 gets the lock and uses it to lock up shared memory

If thread 1 needs to access the data, it will try to acquire the lock. But since the lock is already in use, it can’t. The thread would then wait—so it would be blocked—until the lock is available.

Thread 1 waits until the lock is unlocked

Once thread 2 is done, it would call unlock. The lock would notify one or more of the waiting threads that it’s now available.

Thread 1 is notified that the lock is available

That thread could then scoop up the lock and lock up the data for its own use.

Thread 1 uses the lock

A lock library would use many of the different methods on the Atomics object, but the methods that are most important for this use case are:

  • Atomics.wait
  • Atomics.wake
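
Along those lines, here is a minimal, hedged lock sketch (the basic lock implementation linked above is more careful). lockView is assumed to be an Int32Array on a SharedArrayBuffer, where 0 means unlocked and 1 means locked:

function lock(lockView, i) {
  // Try to flip 0 to 1; seeing anything but 0 means another thread holds the lock.
  while (Atomics.compareExchange(lockView, i, 0, 1) !== 0) {
    Atomics.wait(lockView, i, 1);  // sleep while the value is still 1, then retry
  }
}

function unlock(lockView, i) {
  Atomics.store(lockView, i, 0);   // release the lock
  Atomics.wake(lockView, i, 1);    // wake one waiting thread
}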

Race conditions caused by instruction reordering

There’s a third synchronization problem that Atomics take care of. This one can be surprising.

You probably don’t realize it, but there’s a very good chance that the code you’re writing isn’t running in the order you expect it to. Both compilers and CPUs reorder code to make it run faster.

For example, let’s say you’ve written some code to calculate a total. You want to set a flag when the calculation is finished.

subTotal = price + fee;
total += subTotal;
isDone = true;

To compile this, we need to decide which register to use for each variable. Then we can translate the source code into instructions for the machine.

Diagram showing what that would equal in mock assembly

So far, everything is as expected.

What’s not obvious if you don’t understand how computers work at the chip level (and how the pipelines that they use for executing code work) is that line 2 in our code needs to wait a little bit before it can execute.

Most computers break down the process of running an instruction into multiple steps. This makes sure all of the different parts of the CPU are busy at all times, so it makes the best use of the CPU.

Here’s one example of the steps an instruction goes through:

  1. Fetch the next instruction from memory
  2. Figure out what the instruction is telling us to do (aka decode the instruction), and get the values from the registers
  3. Execute the instruction
  4. Write the result back to the register

Pipeline Stage 1: fetch the instruction
Pipeline Stage 2: decode the instruction and fetch register values
Pipeline Stage 3: Execute the operation
Pipeline Stage 4: Write back the result

So that’s how one instruction goes through the pipeline. Ideally, we want to have the second instruction following directly after it. As soon as it has moved into stage 2, we want to fetch the next instruction.

The problem is that there is a dependency between instruction #1 and instruction #2.

Diagram of a data hazard in the pipeline

We could just pause the CPU until instruction #1 has updated subTotal in the register. But that would slow things down.

To make things more efficient, what a lot of compilers and CPUs will do is reorder the code. They will look for other instructions which don’t use subTotal or total and move those in between those two lines.

Drawing of line 3 of the assembly code being moved between lines 1 and 2

This keeps a steady stream of instructions moving through the pipe.

Because line 3 didn’t depend on any values in line 1 or 2, the compiler or CPU figures it’s safe to reorder like this. When you’re running in a single thread, no other code will even see these values until the whole function is done, anyway.

But when you have another thread running at the same time on another processor, that’s not the case. The other thread doesn’t have to wait until the function is done to see these changes. It can see them almost as soon as they are written back to memory. So it can tell that isDone was set before total.

If you were using isDone as a flag that the total had been calculated and was ready to use in the other thread, then this kind of reordering would create race conditions.

Atomics attempt to solve some of these bugs. When you use an Atomic write, it’s like putting a fence between two parts of your code.

Atomic operations aren’t reordered relative to each other, and other operations aren’t moved around them. In particular, two operations that are often used to enforce ordering are:

  • Atomics.store
  • Atomics.load

All variable updates above Atomics.store in the function’s source code are guaranteed to be done before Atomics.store is done writing its value back to memory. Even if the non-atomic instructions are reordered relative to each other, none of them will be moved below a call to Atomics.store that comes below them in the source code.

And all variable loads after Atomics.load in a function are guaranteed to be done after Atomics.load fetches its value. Again, even if the non-atomic instructions are reordered, none of them will be moved above an Atomics.load that comes above them in the source code.

Diagram showing Atomics.store and Atomics.load maintaining order
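
Concretely, the pattern in the diagram looks something like this hedged sketch, with the values living in Int32Array views on a SharedArrayBuffer (all names and numbers are illustrative):

const sab = new SharedArrayBuffer(8);
const totalView = new Int32Array(sab, 0, 1);
const isDoneView = new Int32Array(sab, 4, 1);

// Thread 1: calculate the total, then publish the flag with an atomic store.
const price = 15, fee = 3;
totalView[0] = price + fee;       // plain writes...
Atomics.store(isDoneView, 0, 1);  // ...are guaranteed to land before this store

// Thread 2 (running in another worker that shares sab):
while (Atomics.load(isDoneView, 0) !== 1) {
  // spin until thread 1 publishes the flag
}
console.log(totalView[0]);        // guaranteed to see the finished total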

Note: The while loop I show here is called a spinlock and it’s very inefficient. And if it’s on the main thread, it can bring your application to a halt. You almost certainly don’t want to use that in real code.

Once again, these methods aren’t really meant for direct use in application code. Instead, libraries would use them to create locks.

Conclusion

Programming multiple threads that share memory is hard. There are many different kinds of race conditions just waiting to trip you up.

Drawing of shared memory with a dragon and "Here be dragons" above

This is why you don’t want to use SharedArrayBuffers and Atomics in your application code directly. Instead, you should depend on proven libraries by developers who are experienced with multithreading, and who have spent time studying the memory model.

It is still early days for SharedArrayBuffer and Atomics. Those libraries haven’t been created yet. But these new APIs provide the basic foundation to build on top of.

The Mozilla BlogMozilla Launches Campaign to Raise Awareness for Internet Health

Today, Mozilla unveils several initiatives: an event focused on Internet Health with special guests DeRay McKesson, Lauren Duca and more, a brand-new podcast, new tech to help create a voice database, and some local SF pop-ups.

Mozilla is doing this to draw the public’s attention to mounting concern over the consolidation of power online, including the Federal Communications Commission’s proposed actions to kill net neutrality.

New Polling

60 percent of people in the U.S. are worried about online services being owned by a small number of companies, according to a new Mozilla/Ipsos poll released today.

“The Internet is a vital tool that touches every aspect of modern life,” said Mark Surman, Mozilla’s Executive Director. “If you care about freedom of speech, economic growth and a level playing field, then you care about guarding against those who would throttle, lock down or monopolize the web as if they owned it.”

According to another Mozilla/Ipsos poll, 76 percent of people in the U.S. support net neutrality.

“At Mozilla, we’re fueling a movement to ensure the web is something that belongs to all of us. Forever,” Surman added.

“A Night for Internet Health”

On Thursday, June 29, Mozilla will host “A Night for Internet Health” — a free live event featuring prominent thinkers, performers, and political voices discussing power, progress, and life on the Web.

Mozilla will be joined by musician Neko Case, Pod Save the People host DeRay McKesson, Teen Vogue columnist Lauren Duca, comedian Moshe Kasher, tech media personality Veronica Belmont, and Sens. Al Franken and Ron Wyden via video.

The event is from 7-10 p.m. (PDT), June 29 at the SFJazz Center in San Francisco. Tickets will be available through the Center’s Box Office starting on June 15.

Credentials are available for media.

IRL podcast

On June 26, Mozilla will debut the podcast IRL: Because Online Life is Real Life. Host Veronica Belmont will share stories from the wilds of the Web, and real talk about online issues that affect us all.

People can listen to the IRL trailer or pre-subscribe to IRL on Apple Podcasts, Stitcher, Pocket Casts, Overcast, or RadioPublic.

Project Common Voice: The World’s First Crowdsourced Voice Database

Voice-enabled devices represent the next major disruption, but access to databases is expensive and doesn’t include a diverse set of accents and languages. Mozilla’s Project Common Voice aims to solve the problem by inviting people to donate samples of their voices to a massive global project that will allow anyone to quickly and easily train voice-enabled applications. Mozilla will make this resource available to the public later this year.

The project will be featured at guerilla pop-ups in San Francisco, where people can also create custom tote bags or grab a T-shirt that expresses their support for a healthy Internet and net neutrality.

To get started, you can download the Common Voice iOS app and visit the project’s website.

Locations:

Pop-ups:
  • Wednesday, June 28: From noon – 6 p.m. PDT at Justin Herman Plaza in San Francisco.
  • Thursday, June 29: From 7 – 10 p.m. PDT at SFJazz in San Francisco.
  • Friday, June 30 – Saturday, July 1: From noon – 6 p.m. PDT at Union Square in San Francisco.

SF Take-Over

Beginning on Monday, June 19, Mozilla will launch a provocative advertising campaign across San Francisco and online, highlighting what’s at stake with the attacks on net neutrality and power consolidation on the web.

The advertisements juxtapose opposing messages, highlighting the power dynamics of the Internet and offering steps people can take to create a healthier Internet. For example, one advertisement contrasts “Let’s Kill Innovation” with “Actually, let’s not. Raise your voice for net neutrality.”

San Franciscans and visitors will see the ads across the city: they will be placed along Market and Embarcadero Streets and at San Francisco Airport, projected on buildings, and will also run online, on radio, on social media and on prominent websites.

About Mozilla

Mozilla has been a pioneer and advocate for the open web for more than 15 years. We promote open standards that enable innovation and advance the Web as a platform for all. Today, hundreds of millions of people worldwide use Mozilla Firefox to experience the Web on computers, tablets and mobile devices. For more information, visit www.mozilla.org.

The post Mozilla Launches Campaign to Raise Awareness for Internet Health appeared first on The Mozilla Blog.

Robert O'CallahanNew "rr pack" Command

I think there's huge potential to use rr for debugging cloud services. Apparently right now interactive debugging is mostly not used in the cloud, which makes sense — it's hard to identify the right process to debug, much less connect to it, and even if you could, stopping it for interactive analysis would likely interfere too much with your distributed system. However, with rr you could record any number of process executions without breaking your system, identify the failed runs after the fact, and debug them at your leisure.

Unfortunately there are a couple of problems making that difficult right now. One is that the largest cloud providers don't support the hardware performance counter rr needs. I'm excited to hear that Amazon has recently enabled some HW performance counters on dedicated hosts — hopefully they can be persuaded to add the retired-conditional-branch counter to their whitelist (and someone can fix the Xen PMU virtualization bug that breaks rr). Another problem is that rr's traces aren't easy to move from one machine to another. I've started addressing this problem by implementing a new rr command, rr pack.

There are two problems with rr traces. One is that on filesystems that do not support "reflink" file copies, to keep recording overhead low we sometimes hardlink files into the trace, or for system libraries we just assume they won't change even if we can't hardlink them. This means traces are not fully self-contained in the latter case, and in the former case the recording can be invalidated if the files change. The other problem is that every time an mmap occurs we clone/link a new file into the trace, even if a previous mmap mapped the same file, because we have no fast way of telling if the file has changed or not. This means traces appear to contain large numbers of large files but many of those files are duplicates.

rr pack fixes both of those problems. You run it on a trace directory in-place. It deduplicates trace files by computing a cryptographic hash (BLAKE2b, 256 bits) of each file and keeping only one file for any given hash. It identifies needed files outside the trace directory, including hardlinks pointing outside it, and copies them into the trace directory. It rewrites trace records (the mmaps file) to refer to the new files, so the trace format hasn't changed. You should be able to copy around the resulting trace, and modify any files outside the trace, without breaking it. I tried pretty hard to ensure that interrupted rr pack commands leave the trace intact (using fsync and atomic rename); of course, an interrupted rr pack may not fully pack the trace, so the operation should be repeated. Successful rr pack commands are idempotent.
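
The deduplication idea itself is easy to sketch. Here is a hedged Node.js version of it; rr is not written in JavaScript and uses BLAKE2b, while this sketch uses SHA-256 for portability, and the function name is made up:

const crypto = require('crypto');
const fs = require('fs');
const path = require('path');

function dedupeDirectory(dir) {
  const seen = new Map();  // content hash -> path of the first file kept
  for (const name of fs.readdirSync(dir)) {  // assumes dir holds only regular files
    const file = path.join(dir, name);
    const hash = crypto.createHash('sha256')
      .update(fs.readFileSync(file))
      .digest('hex');
    if (seen.has(hash)) {
      fs.unlinkSync(file);                // drop the duplicate content...
      fs.linkSync(seen.get(hash), file);  // ...and hardlink to the kept copy
    } else {
      seen.set(hash, file);
    }
  }
}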

We haven't really experimented with trace portability yet so I can't say how easy it will be to just zip up a trace directory and replay it on a different computer. We know that currently replaying on a machine with different CPUID values is likely to fail, but we have a solution in the works for that — Kyle's patches to add ARCH_SET_CPUID to control "CPUID faulting" are in Linux kernel 4.12 and will let rr record and replay CPUID values.

Air MozillaRust Bay Area Meetup June 2017

Rust Bay Area Meetup June 2017 https://www.meetup.com/Rust-Bay-Area/ Tentative agenda will be:

  • Andrew Stone from VMWare talking about Haret
  • William Morgan from Buoyant talking about linkerd-tcp

Sean McArthurhyper v0.11

The async release of hyper is here, version 0.11.0. There’s an updated website, and new guides to try to help you get up to speed with all the changes.

hyper is an HTTP library built in Rust, providing fast and safe client and server implementations.

v0.11

This release marks a form of stability for async hyper. This isn’t saying hyper’s API won’t continue to evolve (and break), but that when such a break happens, it will happen in a v0.12, and the changes will be concentrated. It should be possible to start building frameworks and tools using v0.11.

Even before v0.11 was tagged, many were so excited by the prospect of async hyper that they are already using it. Some examples:

  • sccache has been using hyper’s Client to manage resources in S3.
  • npm uses hyper for their Registry change stream

Async

The biggest deal here, of course, is the switch to non-blocking (or “async”) IO. This has been the push for this release for a long time, and the landscape in the Rust community changed a lot while we were working on this. Last year, a framework for building asynchronous network protocols was released, Tokio. There are a lot of great things to say about it, and hyper has embraced it fully.

This means a big change in API.

For instance, Request and Response bodies are no longer used via the std::io::{Read, Write} traits. Instead, bodies are Streams of bytes. Streams are essentially a Future that can resolve multiple times, which matches how an async connection works: bunches of bytes are received at different times.

By integrating with Tokio, hyper and the community gain a lot. Adding in Transport Layer Security is just combining hyper::server::Http with something like tokio_tls::TlsServer. That same TlsServer can be plugged into any protocol, and Http can be wrapped in any other community piece implementing the right trait. The same can be done with other concepts, like generic timeouts.

Hop over to the guides if you’d like to see how to get working examples.

Headers

Being a large breaking change release, an opportunity was taken to refine the headers system in hyper. Some standout changes:

  • A Raw type was added, and the set_raw, get_raw, etc methods now use it. It allows for a more ergonomic way of adding raw header values, and it’s also faster in most cases.
  • The HeaderFormat trait has been merged into the Header trait. They were previously separate due to trait object safety rules, but now that trait methods can have a where Self: Sized added, there is no need to separate them.
  • The semantics of Header::fmt_header were clarified. Most of the time, headers can be written on one line. There is the rare exception (technically only Set-Cookie is specified) where each “value” must be on a separate line. Now, fmt_header receives a hyper::header::Formatter, with only a fmt_line method. Pretty much every header can just implement std::fmt::Display, and call f.fmt_line(self), but now Set-Cookie doesn’t need to use a hack to format itself.

Performance

hyper v0.10 was no slouch. It can churn through requests and pump out responses. However, as it uses blocking IO, it hits a problem when you have tons of connections to your server at the same time. Blocking IO means it needs to use multiple threads, only being able to deal with 1 connection per thread. Threads, when you have a lot, get to be expensive.1 So, switching to non-blocking IO means that we keep going fast, but each additional connection isn’t nearly as expensive.

hyper v0.11 is fast2, and handles thousands of connections like a champ.

Changelog

The changes are big. There is a changelog if you want to see all of them. The changelog tries to only contain changes from v0.10, but it’s not exhaustive.

Thanks

There are a lot of people to thank for getting this release out the door. This really is a fantastic community.

Next

hyper is now tracking the Futures and Tokio crates. Work is happening in there as well, as we find patterns and problems that aren’t unique to hyper, and should be available for any async protocol.

There has been community desire (and on the hyper team too!) to stabilize some sort of http crate. This would contain types for handling statuses, methods, versions, and headers, but without client or server or protocol version implementations. We’re trying to find a good design that supports all the possible use cases, and HTTP1 and HTTP2, without sacrificing any performance. Once such a thing exists, hyper would likely replace the types it uses with those.

In doing the above, that may mean that hyper’s current headers system won’t fit. It might make sense to break that out into its own crate, so that people who want typed headers can have them, while a bare bones server could live without them. This would also help reqwest in its road to 1.0, since it publicly exports hyper::headers, but hyper likely won’t reach v1.0 before it.

And of course, we always want to go faster. That will never stop!

v0.11.0

Again, go get it! Read the new guides. Tell us what you think!


  1. hyper uses a set number of threads, not growing as more connections are made. It’s a different trade off, but not too relevant for explaining why non-blocking IO is better. 

  2. hyper doesn’t lead the pack in benchmarks (yet), but it’s not in the back either. The last benchmark put it at 58% requests per second of the fastest. Since that benchmark was published, some significant low-hanging improvements were made. A new preview should be available soon. And we’ll keep going! 

The Mozilla BlogThe Best Firefox Ever

With E10s, our new version of Firefox nails the “just right” balance between memory and speed


On the Firefox team, one thing we always hear from our users is that they rely on the web for complex tasks like trip planning and shopping comparisons. That often means having many tabs open. And the sites and web apps running in those tabs often have lots of things going on: animations, videos, big pictures and more. Complex sites are more and more common. The average website today is nearly 2.5 megabytes – the same size as the original version of the game Doom, according to Wired. Up until now, a complex site in one Firefox tab could slow down all the others. That often meant a less than perfect browsing experience.

To make Firefox run even complex sites faster, we’ve been changing it to run using multiple operating system processes. Translation? The old Firefox used a single process to run all the tabs in a browser. Modern browsers split the load into several independent processes. We named our project to split Firefox into multiple processes ‘Electrolysis’ (or E10s) after the chemical process that divides water into its core elements. E10s is the largest change to Firefox code in our history. And today we’re launching our next big phase of the E10s initiative.

A Faster Firefox With Four Content Processes

With today’s release, Firefox uses up to four processes to run web page content across all open tabs. This means that a heavy, complex web page in one tab has a much lower impact on the responsiveness and speed in other tabs. By separating the tabs into separate processes, we make better use of the hardware on your computer, so Firefox can deliver you more of the web you love, with less waiting.

I’ve been living with this turned on by default in the pre-release version of Firefox (Nightly). The performance improvements are remarkable. Besides running faster and crashing less, E10s makes websites feel smoother. Even busy pages, like Facebook newsfeeds, spool out smoothly and cleanly. After making the switch to Firefox with E10s, now I can’t live without it.

Firefox 54 with E10s makes sites run much better on all computers, especially on computers with less memory. Firefox aims to strike the “just right” balance between speed and memory usage. To learn more about Firefox’s multi-process architecture, and how it’s different from Chrome’s, check out Ryan Pollock’s post about the search for the Goldilocks browser.

Multi-Process Without Memory Bloat

Firefox Wins Memory Usage Comparison

In our tests comparing memory usage for various browsers, we found that Firefox used significantly less RAM than other browsers on Windows 10, macOS, and Linux. (RAM stands for Random Access Memory, the type of memory that stores the apps you’re actively running.) This means that with Firefox you can browse freely, but still have enough memory left to run the other apps you want to use on your computer.

The Best Firefox Ever

This is the best release of Firefox ever, with improvements that will be very noticeable to even casual users of our beloved browser. Several other enhancements are shipping in Firefox today, and you can visit our release notes to see the full list. If you’re a web developer, or if you’ve built a browser extension, check out the Hacks Blog to read about all the new Web Platform and WebExtension APIs shipping today.

As we continue to make progress on Project Quantum, we are pushing forward in building a completely revamped browser made for modern computing. It’s our goal to make Firefox the fastest and smoothest browser for PCs and mobile devices. Through the end of 2017, you’ll see some big jumps in capability and performance from Team Firefox. If you stopped using Firefox, try it again. We think you’ll be impressed. Thank you and let us know what you think.

The post The Best Firefox Ever appeared first on The Mozilla Blog.