Matthew Noorenberghe: Firefox screenshots can now be easily captured and compared in automation

Back in 2013, during Australis[1] development, I created a tool called mozscreenshots. The purpose of this tool was to help detect UI regressions and make it easier to review how the UI looks in various configurations on each of our supported platforms. The main hindrance to its regular usage was that it required developers and designers to install it on each machine, tying it up while the images were captured. The great news is that mozscreenshots is now running in automation on Nightlies and on-demand for try pushes, making it much easier to capture images. Comparing the captured images to a reference/base version can now be done via a web interface which means the images don’t need to be downloaded for review. mozscreenshots has already detected two issues during its first week in automation, and I hope that this tool will improve the quality of desktop Firefox by surfacing UI issues sooner. You can read about it in the documentation on MDN, but here’s a quick start guide:

Change detection via Nightly captures

In order to detect UI regressions, screenshots of some sets run on every Nightly. These are compared to the previous day’s Nightly using the compare_screenshots tool (which uses ImageMagick’s `compare`). For now I’m manually kicking this off each day, but soon I hope to automate this and have it send an email to interested parties for investigation. Currently, only one run of "TabsInTitlebar,Tabs,WindowSize,Toolbars,LightweightThemes" occurs on Nightlies. I plan to add runs with the DevEdition theme, DevTools, and Preferences shortly, as the code for those is already written.
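For the curious, the heart of such a comparison can be sketched with a direct ImageMagick call. Here’s a rough illustration in Rust with invented file names; the real compare_screenshots tool does more than this:

use std::process::Command;

// Rough sketch of the kind of ImageMagick `compare` invocation a tool like
// compare_screenshots relies on; file names are invented for illustration.
fn screenshots_differ(base: &str, new: &str, diff_out: &str) -> bool {
    let output = Command::new("compare")
        .args(&["-metric", "AE", base, new, diff_out])
        .output()
        .expect("failed to run ImageMagick's compare");
    // With `-metric AE`, compare prints the number of differing pixels to
    // stderr and writes a difference image to `diff_out`.
    let differing_pixels: f64 = String::from_utf8_lossy(&output.stderr)
        .trim()
        .parse()
        .unwrap_or(f64::INFINITY);
    differing_pixels > 0.0
}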

Capturing on Try

You can request screenshots be captured on a Try push for UI review or comparison to a known-good base by requesting the "mochitest-browser-screenshots" test job. You can specify what you would like captured by setting the MOZSCREENSHOTS_SETS environment variable with a comma-separated list of configurations, like so:

try: -b o -p linux,linux64,macosx64,win32,win64 -t none -u mochitest-browser-screenshots[Ubuntu,10.6,10.10,Windows XP,Windows 7,Windows 8] --setenv MOZSCREENSHOTS_SETS=TabsInTitlebar,WindowSize,LightweightThemes

Note that the job is currently Tier 3 and "excluded" on TreeHerder, so you will need to toggle both of those filters to see the jobs there with the symbol: "M[Tier-3] (ss)". Unlike Nightlies, Try pushes won’t capture any images by default if MOZSCREENSHOTS_SETS isn’t specified. This avoids capturing images when developers request all mochitest runs but don’t really want the screenshots. The capturing is implemented as a mochitest-bc test in the "screenshots" subsuite, meaning configurations can use helpers like BrowserTestUtils. See fetch_screenshots if you would like to download the captured images.

Comparing screenshots

The simplest way to compare images is via the web UI at http://screenshots.mattn.ca/compare/ (Example: http://bit.ly/1Qv4uWD ). Simply provide the project and revision of a base push with images, like the Nightly rev. from about:buildconfig, and your new revision, like a try push with some patches to review. In the background, the images are fetched from automation via fetch_screenshots, and then compared using compare_screenshots with the output displayed on the page. The first comparison for a pair of revisions can take several minutes, as around one thousand (5 platforms x 2 revisions x 100 screenshots) images need to be downloaded and compared for the default set of screenshots. The results are cached, so subsequent comparisons for the same revision are much faster.
Example image generated when a change is detected
There are a lot of opportunities for this tool (e.g. pulse integration, notifications, simplified bug filing based on differences, etc.), and I hope to continue to improve the workflow. Please file bugs on the capturing infrastructure blocking the "mozscreenshots" meta bug (1169179) and bugs related to the web UI, comparison and fetching at https://github.com/mnoorenberghe/mozscreenshots/issues. See also the list of issues and ideas at https://public.etherpad-mozilla.org/p/mozscreenshots which haven’t been filed yet.

Thank you to everyone who has helped with getting mozscreenshots to this point, specifically Felipe Gomes, Armen Zambrano Gasparnian, Joel Maher, Brian Grinstead, Kit Cambridge, and Justin Dolske.

[1] Australis was the code name of the Firefox UI redesign that launched April 29, 2014.

Aaron Klotz: New Mozdbgext Command: !iat

As of today I have added a new command to mozdbgext: !iat.

The syntax is pretty simple:

!iat <hexadecimal address>

This address shouldn’t be just any pointer; it should be the address of an entry in the current module’s import address table (IAT). These addresses are typically very identifiable by the _imp_ prefix in their symbol names.

The purpose of this extension is to look up the name of the DLL from which the function is being imported. Furthermore, the extension compares the expected target address of the import against the actual target address of the import. This allows us to detect API hooking via IAT patching.
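Conceptually the final check is tiny. Here’s a toy sketch in Rust (mozdbgext itself isn’t written this way, and the hard part in a real debugger extension is resolving the expected target from the module’s import and export tables):

// Toy sketch of the expected-vs-actual comparison !iat performs. In reality
// the expected target has to be resolved from the import descriptor and the
// exporting DLL; here both addresses are simply handed to us.
unsafe fn iat_entry_is_hooked(iat_slot: *const usize, expected_target: usize) -> bool {
    // The IAT slot holds the address that the indirect `call` will jump through.
    let actual_target = *iat_slot;
    actual_target != expected_target
}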

An Example Session

I fired up a local copy of Nightly, attached a debugger to it, and dumped the call stack of its main thread:


 # ChildEBP RetAddr
00 0018ecd0 765aa32a ntdll!NtWaitForMultipleObjects+0xc
01 0018ee64 761ec47b KERNELBASE!WaitForMultipleObjectsEx+0x10a
02 0018eecc 1406905a USER32!MsgWaitForMultipleObjectsEx+0x17b
03 0018ef18 1408e2c8 xul!mozilla::widget::WinUtils::WaitForMessage+0x5a
04 0018ef84 13fdae56 xul!nsAppShell::ProcessNextNativeEvent+0x188
05 0018ef9c 13fe3778 xul!nsBaseAppShell::DoProcessNextNativeEvent+0x36
06 0018efbc 10329001 xul!nsBaseAppShell::OnProcessNextEvent+0x158
07 0018f0e0 1038e612 xul!nsThread::ProcessNextEvent+0x401
08 0018f0fc 1095de03 xul!NS_ProcessNextEvent+0x62
09 0018f130 108e493d xul!mozilla::ipc::MessagePump::Run+0x273
0a 0018f154 108e48b2 xul!MessageLoop::RunInternal+0x4d
0b 0018f18c 108e448d xul!MessageLoop::RunHandler+0x82
0c 0018f1ac 13fe78f0 xul!MessageLoop::Run+0x1d
0d 0018f1b8 14090f07 xul!nsBaseAppShell::Run+0x50
0e 0018f1c8 1509823f xul!nsAppShell::Run+0x17
0f 0018f1e4 1514975a xul!nsAppStartup::Run+0x6f
10 0018f5e8 15146527 xul!XREMain::XRE_mainRun+0x146a
11 0018f650 1514c04a xul!XREMain::XRE_main+0x327
12 0018f768 00215c1e xul!XRE_main+0x3a
13 0018f940 00214dbd firefox!do_main+0x5ae
14 0018f9e4 0021662e firefox!NS_internal_main+0x18d
15 0018fa18 0021a269 firefox!wmain+0x12e
16 0018fa60 76e338f4 firefox!__tmainCRTStartup+0xfe
17 0018fa74 77d656c3 KERNEL32!BaseThreadInitThunk+0x24
18 0018fabc 77d6568e ntdll!__RtlUserThreadStart+0x2f
19 0018facc 00000000 ntdll!_RtlUserThreadStart+0x1b

Let us examine the code at frame 3:


14069042 6a04            push    4
14069044 68ff1d0000      push    1DFFh
14069049 8b5508          mov     edx,dword ptr [ebp+8]
1406904c 2b55f8          sub     edx,dword ptr [ebp-8]
1406904f 52              push    edx
14069050 6a00            push    0
14069052 6a00            push    0
14069054 ff159cc57d19    call    dword ptr [xul!_imp__MsgWaitForMultipleObjectsEx (197dc59c)]
1406905a 8945f4          mov     dword ptr [ebp-0Ch],eax

Notice the function call to MsgWaitForMultipleObjectsEx occurs indirectly; the call instruction is referencing a pointer within the xul.dll binary itself. This is the IAT entry that corresponds to that function.

Now, if I load mozdbgext, I can take the address of that IAT entry and execute the following command:


0:000> !iat 0x197dc59c
Expected target: USER32.dll!MsgWaitForMultipleObjectsEx
Actual target: USER32!MsgWaitForMultipleObjectsEx+0x0

!iat has done two things for us:

  1. It did a reverse lookup to determine the module and function name for the import that corresponds to that particular IAT entry; and
  2. It followed the IAT pointer and determined the symbol at the target address.

Normally we want both the expected and actual targets to match. If they don’t, we should investigate further, as this mismatch may indicate that the IAT has been patched by a third party.

Note that the !iat command is breakpad-aware (provided that you’ve already loaded the symbols using !bploadsyms) but can fall back to the Microsoft symbol engine as necessary.

Further note that the !iat command does not yet accept the _imp_ symbolic names for the IAT entries; you need to enter the hexadecimal representation of the pointer.

Eric Rahm: Memory Usage of Firefox with e10s Enabled

Quick background

With the e10s project full steam ahead, likely to be enabled for many users in mid-2016, it seemed like a good time to measure the memory overhead of switching Firefox from a single-process architecture to a multi-process architecture. The concern here is simple: the more processes we have, the more memory we use. Starting in Q4 2015, I began setting up a test to measure the memory usage of Firefox with a variable number of content processes.

Methodology

For the test I used a slightly modified version of the AWSY framework that I maintain for areweslimyet.com. This test runs through a sample pageset, the same one used in Talos perf testing, in an attempt to simulate a long-lived session.

The steps:

  1. Open Firefox configured to use N content processes.
  2. Measure memory usage.
  3. Open 100 URLs in 30 tabs, cycling through tabs once 30 are opened. Wait 10 seconds per tab.
  4. Measure memory usage.
  5. Close all tabs.
  6. Measure memory usage.

I performed two iterations of this sequence, reporting the startup memory usage from the first iteration and the end-of-test memory usage (TabsOpen, TabsClosed) from the second.

Note: Just summing the total memory usage of each Firefox process is not a useful metric as it will include memory shared between the main process and the content processes. For a more realistic baseline I chose to use a combination of RSS and USS (aka unique set size, private working bytes):

total_memory = RSS(parent_process) + sum(USS(content_processes))

For example if we had:

Process     RSS  USS
parent      100   50
content_1    90   30
content_2    95   40

total_memory = 100 + 30 + 40
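As a sanity check, the metric itself is trivial to compute. Here’s a minimal sketch in Rust using the worked numbers above (values in MiB):

// Minimal sketch of the combined metric described above.
fn total_memory(parent_rss: u64, content_uss: &[u64]) -> u64 {
    parent_rss + content_uss.iter().sum::<u64>()
}

fn main() {
    // Worked example: parent RSS of 100, content USS of 30 and 40.
    assert_eq!(total_memory(100, &[30, 40]), 170);
}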

Results

Note on memory checkpoints:

  • Settled: 30 seconds have passed since previous checkpoint.
  • ForceGC: We manually invoked garbage collection.
  • We list the memory usage for each checkpoint using 0, 1, 2, 4, 8 content processes.

Linux

Checkpoint           0         1         2         4         8   (content processes)
Start 190 MiB 232 MiB 223 MiB 223 MiB 229 MiB
StartSettled 173 MiB 219 MiB 216 MiB 219 MiB 213 MiB
TabsOpen 457 MiB 544 MiB 586 MiB 714 MiB 871 MiB
TabsOpenSettled 448 MiB 542 MiB 582 MiB 696 MiB 872 MiB
TabsOpenForceGC 415 MiB 510 MiB 560 MiB 670 MiB 820 MiB
TabsClosed 386 MiB 507 MiB 401 MiB 381 MiB 381 MiB
TabsClosedSettled 264 MiB 359 MiB 325 MiB 308 MiB 303 MiB
TabsClosedForceGC 242 MiB 322 MiB 304 MiB 285 MiB 281 MiB

Windows

Checkpoint           0         1         2         4         8   (content processes)
Start 172 MiB 212 MiB 207 MiB 204 MiB 213 MiB
StartSettled 194 MiB 236 MiB 234 MiB 232 MiB 234 MiB
TabsOpen 461 MiB 537 MiB 631 MiB 800 MiB 1,099 MiB
TabsOpenSettled 463 MiB 535 MiB 635 MiB 808 MiB 1,108 MiB
TabsOpenForceGC 447 MiB 514 MiB 593 MiB 737 MiB 990 MiB
TabsClosed 429 MiB 512 MiB 435 MiB 333 MiB 347 MiB
TabsClosedSettled 356 MiB 427 MiB 379 MiB 302 MiB 306 MiB
TabsClosedForceGC 342 MiB 392 MiB 360 MiB 297 MiB 295 MiB

OSX

Checkpoint           0         1         2         4         8   (content processes)
Start 319 MiB 350 MiB 342 MiB 336 MiB 336 MiB
StartSettled 311 MiB 393 MiB 383 MiB 384 MiB 382 MiB
TabsOpen 889 MiB 1,038 MiB 1,243 MiB 1,397 MiB 1,694 MiB
TabsOpenSettled 876 MiB 977 MiB 1,105 MiB 1,252 MiB 1,632 MiB
TabsOpenForceGC 795 MiB 966 MiB 1,096 MiB 1,235 MiB 1,540 MiB
TabsClosed 794 MiB 996 MiB 977 MiB 889 MiB 883 MiB
TabsClosedSettled 738 MiB 925 MiB 876 MiB 823 MiB 832 MiB
TabsClosedForceGC 621 MiB 800 MiB 799 MiB 755 MiB 747 MiB

Conclusions

Simply put: the more content processes we use, the more memory we use. On the plus side it’s not a 1:1 factor: with 8 content processes we see roughly a doubling of memory usage on the TabsOpenSettled measurement. It’s a bit worse on Windows, a bit better on OSX, but it’s not 8 times worse.

Overall we see a 10-20% increase in memory usage for the 1 content process case (which is what we plan on shipping initially). This seems like a fair tradeoff for potential security and performance benefits, but as we try to grow the number of content processes we’ll need to take another look at where that memory is being used.

For the next steps I’d like to take a look at how our memory usage compares to other browsers. Expect a follow up post on that shortly.

Support.Mozilla.Org: What’s up with SUMO – 11th February

Hello, SUMO Nation!

How have you been? Apparently it’s the season to be a bit sick here and there… Hopefully you’re all fine and happy! We wish you a moderately easy-going weekend and dive straight into the most recent updates from the world of SUMO. Let’s go!

Welcome, new contributors!

If you just joined us, don’t hesitate – come over and say “hi” in the forums!

Contributors of the week

  • Safwan – for his continuous coding awesomeness and working on making our localizers’ lives easier with his magnificent additions to Kitsune – tōmākē anēka dhan’yabāda (many thanks to you)!

We salute you!

Don’t forget that if you are new to SUMO and someone helped you get started in a nice way you can nominate them for the Buddy of the Month!

Most recent SUMO Community meeting

The next SUMO Community meeting…

  • …is happening on Monday the 15th of February – join us!
  • Reminder: if you want to add a discussion topic to the upcoming meeting agenda:
    • Start a thread in the Community Forums, so that everyone in the community can see what will be discussed and voice their opinion here before Monday (this will make it easier to have an efficient meeting).
    • Please do so as soon as you can before the meeting, so that people have time to read, think, and reply (and also add it to the agenda).
    • If you can, please attend the meeting in person (or via IRC), so we can follow up on your discussion topic during the meeting with your feedback.

Developers

Community

Social

Support Forum

  • No updates from the forums… But soon… ;-)

Knowledge Base

Localization

  • The “Save as Draft” feature has been submitted and released, thanks to Safwan, our resident coding superhero :-) Now, when you localize a long article and need to take a break, you can save a “draft” version to finish it later – and it won’t get in the way of other localizers! Documentation about this coming soon.
  • We are deprioritizing Firefox OS content localization for now (more context here). If you have questions about it, let me know.
  • Goals for February can be found on your dashboards – go forth and localize!
  • Spanish, Portuguese, French, Italian, Polish, and German localizers looking for screenshots for the new Firefox for iOS 2.0 content – look no more! They’re here, courtesy of Joni.
  • Do you speak a fringe language or know someone who does? Take a look at this summerlab session in San Sebastian!

Firefox

  • for iOS
    • We’re almost there for 2.0… are you ready, iOS users?

There you go, I hope you enjoyed this brief summary of what’s going on with SUMO. Let me know in the comments! Don’t forget to talk to your friends and family about mzl.la/support, mzl.la/help, and maybe even mzl.la/sumodev :-) See you around!

Air Mozilla: Web QA Weekly Meeting, 11 Feb 2016

Web QA Weekly Meeting: This is our weekly gathering of Mozilla’s Web QA team, filled with discussion on our current and future projects, ideas, demos, and fun facts.

Mark Surman: MoFo 2016 Goals + KPIs

Earlier this month, we started rolling out Mozilla Foundation’s new strategy. The core goal is to make the health of the open internet a mainstream issue globally. We’re going to do three things to make this happen: shape the agenda; connect leaders; and rally citizens. I provided an overview of this strategy in another post back in December.


As we start rolling out this strategy, one of our first priorities is figuring out how to measure both the strength and the impact of our new programs. A team across the Foundation has spent the past month developing an initial plan for this kind of measurement. We’ve posted a summary of the plan in slides (here) and the full plan (here).

Preparing this plan not only helped us get clear on the program qualities and impact we want to have, it also helped us come up with a crisper way to describe our strategy. Here is a high level summary of what we came up with:

1. Shape the agenda

Impact goal: our top priority issues are mainstream issues globally (e.g. privacy).
Measures: citations of Mozilla / MLN members, public opinion

2. Rally citizens

Strength goal: rally 10s of millions of people to take action and change how they — and their friends — use the web.
Measures: # of active advocates, list size

Impact goal: people make better, more conscious choices. Companies and governments react with better products and laws.
Measures: per campaign evaluation, e.g. educational impact or did we defeat bad law?

3. Connect leaders

Strength goal: build a cohesive, world class network of people who care about the open internet.
Measures: network strength; includes alignment, connectivity, reach and size

Impact goal: network members shape + spread the open internet agenda.
Measures: participation in agenda-setting, citations, influence evaluation

Last week, we walked through this plan with the Mozilla Foundation board. What we found: it turns out that looking at metrics is a great way to get people talking about the intersection of high level goals and practical tactics. E.g. we need to be thinking about tools other than email as we grow our advocacy work outside of Europe and North America.

If you’re involved in our community or just following along with our plans, I encourage you to open up the slides and talk them through with some other people. My bet is they will get you thinking in new and creative ways about the work we have ahead of us. If they do, I’d love to hear thoughts and suggestions. Comments, as always, welcome on this post and by email.

The post MoFo 2016 Goals + KPIs appeared first on Mark Surman.

Air Mozilla: Reps weekly, 11 Feb 2016

Reps weekly: This is a weekly call with some of the Reps to discuss all matters about/affecting Reps and invite Reps to share their work with everyone.

Christian Heilmann: Making ES6 available to all with ChakraCore – A talk at JFokus 2016

Today I gave two talks at JFokus in Stockholm, Sweden. This is the one about JavaScript and ChakraCore.


Presentation: Making ES6 available to all with ChakraCore
Christian Heilmann, Microsoft

2015 was a year of massive JavaScript innovation and changes. Lots of great features were added to the language, but using them was harder than before, as not all features are backwards compatible with older browsers. Now browsers have caught up, and with the open sourcing of ChakraCore you have a JavaScript runtime to embed in your products and reliably have ECMAScript support. Chris Heilmann of Microsoft tells the story of the language and the evolution of the engine, and leaves you with a lot of tips and tricks on how to benefit from the new language features in a simple way.

I wrote the talk the night before, and thought I’d structure it the following way:

  • Old issues
  • The learning process
  • The library/framework issue
  • The ES6 buffet
  • Standards and interop
  • Breaking monopolies

Slides

The Slide Deck is available on Slideshare.

Screencast

A screencast of the talk is on YouTube

Resources:

Soledad Penades: Score another one for the web!

Last week I made a quick trip to Spain. It was a pretty early flight and I was quite sleepy and so… I totally forgot my laptop! I indeed thought that my bag felt “a bit weird”, as the laptop makes the back flat (when it’s in the bag), but I was quite zombified, and so I just kept heading to the station.

I realised my laptop wasn’t there by the time I had to take my wallet out to buy a train ticket. You see, TFL have been making a really big noise about the fact that you can now use your Oyster to travel to Gatwick. But they have been very quiet about requiring people to have enough credit in their cards to pay the full amount of the ticket. And since I use “auto top up”, sometimes my card might have £18. Sometimes it won’t, as in this case.

Anyway, I didn’t want to go back for the laptop, as I was going on a short holiday trip, and a break from computers would be good. Except… I did have stuff to do, namely researching for my next trip!

I could use my phone, but I quite dislike using phones for researching trips: the screen is just too small, the keyboard is insufferable, and I want to open many tabs, look at maps, go back and forth, etc., which isn’t easy on a phone. I could also borrow some relative’s laptop… or I could try to resuscitate an old tablet that I hadn’t used since 2013!

It had become faulty at the beginning of 2013, but I thought I had fixed it. Then, months later, it decided to enter its mad loop of “restart, restart, restart and repeat” during a transatlantic flight. I had to hide it in my bag and let its battery run out. And then I was very bored during both the rest of the flight and the flight back, as all my carefully compiled entertainment was on it. Bah! And so I stopped using it, and at some point I brought it to Spain, “just in case”.

Who would have guessed I’d end up using it again!?

I first spent about 30 minutes looking for a suitable plug for the charger. This tablet requires 2A and all the USB chargers I could find were 0.35A or 0.5A. The charger only had USA style pins, but that part could be removed, and revealed a “Mickey mouse” connector, or C7/C8 coupler if you want to be absolutely specific. A few years ago you could find plenty of appliances using this connector, but nowadays? I eventually found the charger for an old camera, with one of these cables! So I made a Frankenchargenstein out of old parts. Perfect.

The tablet took a long time to even show the charging screen. After a while I could finally turn it on, and oh wow, Android has changed a lot for the better since 3.1. But even if this tablet could be updated easily, I had no laptop and no will to install developer tools on somebody else’s laptop. So I was stuck in 3.1.

The Play Store behaved weirdly, with random glitches here and there. Many apps would not show any update, as developers have moved on to use newer versions of the SDK in order to use new hardware features and what not, and I don’t blame them, because programming apps that can work with different SDKs and operating system versions in Android is a terribly painful experience. So the easiest way to deal with old hardware or software versions is just not supporting them at all. But this leaves out people using outdated devices.

One of these “discriminatory apps” I wanted to install for my research was a travel app which lets you save stuff you would like to visit, and displays it on a map, which is very convenient for playing it by ear when you’re out and about. Sadly, it did not offer a version compatible with my device.

But I thought: Firefox still works in Android 3.1!

I got it updated to the latest version and opened the website for this app/service, and guess what? I could access the same functionalities I was looking for, via the web.

And being really honest, it was even better than using the app. I could have a tab with the search results, and open the interesting ones in a different tab, then close them when I was done perusing, without losing the scrolling point in the list. You know… like we do with normal websites. And in fact we’re not even doing anything superspecial with the app either. It’s not like it’s a high end game or like it works offline (which it doesn’t). Heck, it doesn’t even work properly when the network is a bit flaky… like most of the apps out there 😛

So sending a huge thanks to all the Firefox for Android team for extending the life of my ancient device, and a sincere message to app makers: make websites, not apps 😉

flattr this!

Air Mozilla: Quality Team (QA) Public Meeting, 10 Feb 2016

Quality Team (QA) Public Meeting: The bi-monthly status review of the Quality team at Mozilla. We showcase new and exciting work on the Mozilla project, highlight ways you can get...

Air Mozilla: The Joy of Coding - Episode 44

The Joy of Coding - Episode 44: mconley livehacks on real Firefox bugs while thinking aloud.

Chris H-C: SSE2 Support in Firefox Users

Let me tell you a story.

Intel invented the x86 assembly language back in the Dark Ages of the Late 1970s. It worked, and many CPUs implemented it, consolidating a fragmented landscape into a more cohesive and compatible whole. Unfortunately, x86 had limitations, so in time it would have to go.

Lo, the time came in the Middle Ages of the Mid 1980s when x86 had to be replaced with something that could handle 32-bit widths for numbers and addresses. And more registers. And yet more addressing modes.

But x86 was popular, so Intel didn’t replace it. Instead they extended it with something called IA-32. And it was popular as well, not least because it was backwards-compatible with basic x86: all of the previous x86 programs would work on x86 + IA-32.

By now, personal and business computing was well in the mainstream. This means Intel finally had some data on what, at the lowest level, programmers were wanting to run on their chips.

It turns out that most of the heaviest computations people wanted to do on computers were really simple to express: multiply this list of numbers by a number, add these two lists of numbers together… spreadsheet sorts of things. Finance sorts of things.

But also video games sorts of things. Windows 95 released with DirectX and unleashed a flood of computer gaming. To the list we can now add: move every point and pixel of this 3D model forward by one step, transform all of this geometry and these textures from this camera POV to that one, recolour this sprite’s pixels to be darker to account for shade.

The structure all of these (and a lot of other) tasks had in common was that they all wanted to do one thing (multiply, add, move, transform, recolour) over multiple pieces of data (one list of numbers, multiple lists of numbers, points and pixels, geometry and textures, sprite colours).

SIMD stands for Single Instruction Multiple Data and is how computer engineers describe these sorts of “do one action over and over again to every individual element in this list of data” operations.

So, for Intel’s new flagship “Pentium” processor they were releasing in 1997 they introduced a new extension: MMX (which doesn’t stand for anything. They apparently chose the letters because they looked cool). MMX lets you do some of those SIMD things directly at the lowest level of the computer with the limitation that you can’t also be performing high-precision math at the same time.

AMD was competing with Intel. Not happy with the limitations of the MMX extension, they developed their own x86 extension “3DNow!” which performed the same operations, but without the limitations and with higher precision.

Intel retaliated with SSE: Streaming SIMD Extensions. They shipped it on their Pentium III processors starting in ’99. It wasn’t a full replacement for MMX, though, so they had to quickly follow it up in the Pentium 4.

Which finally brings us to SSE2. First released in 2001 in the Pentium 4 line of processors (also implemented by AMD two years later in their Opteron line), it reimplemented MMX’s capabilities without its shortcomings (and added some other capabilities at the same time).

So why am I talking ancient history? 2001 was fifteen years ago. What use do we have for this lesson on SSE2 when even SSE4 has been around since ’07, and AVX-512 will ship on real silicon within months?

Well, it turns out that Firefox doesn’t assume you have SSE2 on your computer. It can run on fifteen-year-old hardware, if you have it.

There are some code paths that benefit strongly from the ability to run the SIMD instructions present in SSE2. If Mozilla can’t assume that everyone running Firefox has a computer capable of running SSE2, Firefox has to detect, at runtime, whether the user’s computer is capable of using that fast path.
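For illustration, here is roughly what such a runtime check looks like. This sketch uses Rust’s standard-library feature-detection macro; Firefox’s own detection lives in C++ and differs in detail, but it answers the same question:

// Rough illustration of a runtime SSE2 check; only meaningful on x86 CPUs.
#[cfg(any(target_arch = "x86", target_arch = "x86_64"))]
fn pick_path() {
    if is_x86_feature_detected!("sse2") {
        // dispatch to the SIMD fast path
    } else {
        // dispatch to the scalar fallback
    }
}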

This makes Firefox bigger, slower, and harder to test and maintain.

A question came up on the dev-platform mailing list about how many Firefox users are actually running computers that lack SSE2. I live in a very rich country and have a very privileged job. Any assumption I make about who does and does not have the ability to run computers that are newer than fifteen years old is going to be clouded by biases I cannot completely account for.

So we turn to the data. Which means Telemetry. Which means I get to practice writing custom analyses. (Yay!)

It turns out that, if a Firefox user has Telemetry enabled, we ask that user’s computer about a lot of environmental information. What is your operating system? What version? How much RAM do you have installed? What graphics card do you have? What version is its driver?

And, yes: What extensions does your CPU support?

We collect this information to determine from real users machines whether a particular environmental variable makes Firefox behave poorly. In the not-too-distant past there was a version of Firefox that would just appear black. No reason, no recourse, no explanation. By examining environmental data we were able to track down what combination of graphics cards and driver versions were susceptible to this and develop a fix within days.

(If you want to see an application of this data yourself, here is a dashboard showing the population breakdown of Firefox users. You can use it to see how much of the Firefox user base is like you. For me, less than 1% of the user base was running a computer like mine with a Firefox like mine, reinforcing that what I might think makes sense may not exactly be representative of reality for Firefox users.)

So I asked of the data: of all the users reporting Telemetry on Thursday January 21, 2016, how many have SSE2 capability on their CPUs?

And the answer was: about 99.5%

This would suggest that at most 0.5% of the Firefox release population are running CPUs that do not have SSE2. This is not strictly correct (there are a variety of data science reasons why we cannot prove anything about the population that doesn’t report SSE2 capability), but it’s a decent approximation so let’s go with it.

From there, as with most Telemetry queries, there were more questions. The first was: “Are the users not reporting SSE2 support keeping themselves on older versions of Firefox?” This is a good question because, if the users are keeping themselves on old versions, we can enable SSE2 support in new versions and not worry about the users being unable to upgrade because they already chose not to.

Turns out, no. They’re not.

With such a small population we’re subdividing (0.5%) it’s hard to say anything for certain, but it appears as though they are mostly running up-to-date versions of Firefox and, thus, would be impacted by any code changes we release. Ah, well.

The next questions were: “We know SSE2 is required to run Windows 8. Are these users stuck on Windows XP? Are there many Linux users without SSE2?”

Turns out: yes and no. Yes, they almost all are on Windows XP. No, basically none of them are running Linux.

Support and security updates for Windows XP stopped on April 8, 2014. It probably falls under Mozilla’s mission to try and convince users still running XP to upgrade themselves if possible (as they did on Data Privacy Day), to improve the security of the Internet and to improve those users’ access to the Web.

If you are running Windows XP, or administer a family member’s computer who is, you should probably consider upgrading your operating system as soon as you are able.

If you are running an older computer and want to know if you might not have SSE2, you can open a Firefox tab to about:telemetry and check the Environment section. Under system should be a field “cpu.extensions” that will contain the token “hasSSE2” if Firefox has detected that you have SSE2.

(If about:telemetry is mostly blank, try clicking on the ‘Change’ links at the top labelled “FHR data upload is disabled” and “Extended Telemetry recording is disabled” and then restarting your Firefox)

SSE2 will probably be coming soon as a system requirement for Firefox. I hope all of our users are ready for when that happens.

:chutten


Luis Villa: Reinventing FOSS user experiences: a bibliography

There is a small genre of posts around re-inventing the interfaces of popular open source software; I thought I’d collect some of them for future reference:

Recent:

Older:

The first two (Drupal, WordPress) are particularly strong examples of the genre because they directly grapple with the difficulty of change for open source projects. I’m sure that early Firefox and VE discussions also did that, but I can’t find them easily – pointers welcome.

Other suggestions welcome in comments.

Robert O'Callahan: Introducing rr Chaos Mode

Most of my rr talks start with a slide showing some nondeterministic failures in test automation while I explain how hard it is to debug such failures and how record-and-replay can help. But the truth is that until now we haven't really used rr for that, because it has often been difficult to get nondeterministic failures in test automation to show up under rr. So rr's value has mainly been speeding up debugging of failures that were relatively easy to reproduce. I guessed that enhancing rr recording to better reproduce intermittent bugs is one area where a small investment could quickly pay off for Mozilla, so I spent some time working on that over the last couple of months.

Based on my experience fixing nondeterministic Gecko test failures, I hypothesized that our nondeterministic test failures are mainly caused by changes in scheduling. I studied a particular intermittent test failure that I introduced and fixed, where I completely understood the bug but the test had only failed a few times on Android and nowhere else, and thousands of runs under rr could not reproduce the bug. Knowing what the bug was, I was able to show that sleeping for a second at a certain point in the code when called on the right thread (the ImageBridge thread) at the right moment would reproduce the bug reliably on desktop Linux. The tricky part was to come up with a randomized scheduling policy for rr that would produce similar results without prior knowledge of the bug.

I first tried the obvious: allow the lengths of timeslices to vary randomly; give threads random priorities and observe them strictly; reset the random priorities periodically; schedule threads with the same priority in random order. This didn't work, for an interesting reason. To trigger my bug, we have to avoid scheduling the ImageBridge thread while the main thread waits for a 500ms timeout to expire. During that time the ImageBridge thread is the only runnable thread, so any approach that can only influence which runnable thread to run next (e.g. CHESS) will not be able to reproduce this bug.

To cut a long story short, here's an approach that works. Use just two thread priorities, "high" and "low". Make most threads high-priority; I give each thread a 0.1 probability of being low priority. Periodically re-randomize thread priorities. Randomize timeslice lengths. Here's the good part: periodically choose a short random interval, up to a few seconds long, and during that interval do not allow low-priority threads to run at all, even if they're the only runnable threads. Since these intervals can prevent all forward progress (no control of priority inversion), limit their length to no more than 20% of total run time. The intuition is that many of our intermittent test failures depend on CPU starvation (e.g. a machine temporarily hanging), so we're emulating intense starvation of a few "victim" threads, and allowing high-priority threads to wait for timeouts or input from the environment without interruption.
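To make the policy concrete, here’s a loose sketch of the priority and starvation-interval logic. It’s written in Rust with the rand crate purely for illustration; rr itself is C++ and its real scheduler is more involved:

use rand::Rng; // assumption: the `rand` crate is available

// Constants mirror the post: ~10% of threads are low priority, and the
// starvation intervals are capped at 20% of total run time.
const LOW_PRIORITY_PROBABILITY: f64 = 0.1;
const MAX_STARVATION_FRACTION: f64 = 0.2;

#[derive(Clone, Copy, PartialEq)]
enum Priority {
    High,
    Low,
}

// Periodically re-run this for every thread.
fn randomize_priority<R: Rng>(rng: &mut R) -> Priority {
    if rng.gen::<f64>() < LOW_PRIORITY_PROBABILITY {
        Priority::Low
    } else {
        Priority::High
    }
}

// During a starvation interval, low-priority threads are not scheduled at all,
// even if they are the only runnable threads.
fn may_schedule(priority: Priority, in_starvation_interval: bool) -> bool {
    priority == Priority::High || !in_starvation_interval
}

// Starvation intervals can prevent all forward progress, so cap the total
// time spent inside them.
fn may_start_starvation_interval(starved_so_far: f64, total_run_time: f64) -> bool {
    starved_so_far < MAX_STARVATION_FRACTION * total_run_time
}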

With this approach, rr can reproduce my bug in several runs out of a thousand. I've also been able to reproduce a top intermittent (now being fixed), an intermittent shutdown hang in IndexedDB we've been chasing for a while, and at least one other person has found this enabled reproducing their bug. I'm sure there are still bugs this approach can't reproduce, but it's good progress.

I just landed all this work on rr master. The normal scheduler doesn't do this randomization, because it reduces throughput, i.e. slows down recording for easy-to-reproduce bugs. Run rr record -h to enable chaos mode for hard-to-reproduce bugs.

I'm very interested in studying more cases where we figure out a bug that rr chaos mode was not able to reproduce, so I can extend chaos mode to find such bugs.

Robert O'Callahan: rr Talk At linux.conf.au

For the last few days I've been attending linux.conf.au, and yesterday I gave a talk about rr. The talk is now online. It was a lot of fun and I got some good questions!

David Lawrence: Happy BMO Push Day!

the following changes have been pushed to bugzilla.mozilla.org:

  • [1245003] increase the apache sizelimit used by the taskcluster image
  • [1246864] Unable to comment tickets with “WONTFIX” status without change the status on the experimental UI

discuss these changes on mozilla.tools.bmo.


Michael Kaply: Mac Admin and Developer Conference UK Presentation

I did a presentation today at the Mac Admin and Developer Conference UK on configuring Firefox.

I’m making the slides available now, and will link to the video when it is available.

Also, here is a link to the demo AutoConfig file.

The Mozilla Blog: The Internet is a Global Public Resource

One of the things that first drew me to Mozilla was this sentence from our manifesto:

“The Internet is a global public resource that must remain open and accessible to all.”

These words made me stop and think. As they sunk in, they made me commit.

I committed myself to the idea that the Internet is a global public resource that we all share and rely on, like water. I committed myself to stewarding and protecting this important resource. I committed myself to making the importance of the open Internet widely known.

When we say, “Protect the Internet,” we are not talking about boosting Wi-fi so people can play “Candy Crush” on the subway. That’s just bottled water, and it will very likely exist with or without us. At Mozilla, we are talking about “the Internet” as a vast and healthy ocean.

We believe the health of the Internet is an important issue that has a huge impact on our society. An open Internet—one with no blocking, throttling, or paid prioritization—allows individuals to build and develop whatever they can dream up, without a huge amount of money or asking permission. It’s a safe place where people can learn, play and unlock new opportunities. These things are possible because the Internet is an open public resource that belongs to all of us.

Making the Internet a Mainstream Issue

Not everyone agrees that the health of the Internet is a major priority. People think about the Internet mostly as a “thing” other things connect to. They don’t see the throttling or the censorship or the surveillance that are starting to become pervasive. Nor do they see how unequal the benefits of the Internet have become as it spreads across the globe. Mozilla aims to make the health of the Internet a mainstream issue, like the environment.

Consider the parallels with the environmental movement for a moment. In the 1950s, only a few outdoor enthusiasts and scientists were talking about the fragility of the environment. Most people took clean air and clean water for granted. Today, most of us know we should recycle and turn out the lights. Our governments monitor and regulate polluters. And companies provide us with a myriad of green product offerings—from organic food to electric cars.

But this change didn’t happen on its own. It took decades of hard work by environmental activists before governments, companies and the general public took the health of the environment seriously as an issue. This hard work paid off. It made the environment a mainstream issue and got us all looking for ways to keep it healthy.

When it comes to the health of the Internet, it’s like we’re back in the 1950s. A number of us have been talking about the Internet’s fragile state for decades—Mozilla, the EFF, Snowden, Access, the ACLU, and many more. All of us can tell a clear story of why the open Internet matters and what the threats are. Yet we are a long way from making the Internet’s health a mainstream concern.

We think we need to change this, so much so that it’s now one of Mozilla’s explicit goals.

Read Mark Surman’s “Mozilla Foundation 2020 Strategy” blog post.

Starting the Debate: Digital Dividends

The World Bank’s recently released “2016 World Development Report” shows that we’re making steps in the right direction. Past editions have focused on major issues like  “jobs.” This year the report focuses directly on “digital dividends” and the open Internet.

According to the report, the benefits of the Internet, like inclusion, efficiency, and innovation, are unequally spread. They could remain so if we don’t make the Internet “accessible, affordable, and open and safe.” Making the Internet accessible and affordable is urgent. However,

“More difficult is keeping the internet open and safe. Content filtering and censorship impose economic costs and, as with concerns over online privacy and cybercrime, reduce the socially beneficial use of technologies. Must users trade privacy for greater convenience online? When are content restrictions justified, and what should be considered free speech online? How can personal information be kept private, while also mobilizing aggregate data for the common good? And which governance model for the global internet best ensures open and safe access for all? There are no  simple answers, but the questions deserve a vigorous global debate.”

—”World Development Report 2016: Main Messages,” p.3

We need this vigorous debate. A debate like this can help make the open Internet an issue that is taken seriously. It can shape the issue. It can put it on the radar of governments, corporate leaders and the media. A debate like this is essential. Mozilla plans to participate and fuel this debate.

Creating A Public Conversation

Of course, we believe the conversation needs to be much broader than just those who read the “World Development Report.” If we want the open Internet to become a mainstream issue, we need to involve everyone who uses it.

We have a number of plans in the works to do exactly this. They include collaboration with the likes of the World Bank, as well as our allies in the open Internet movement. They also include a number of experiments in a.) simplifying the “Internet as a public resource” message and b.) seeing how it impacts the debate.

Our first experiment is an advertising campaign that places the Internet in a category with other human needs people already recognize: Food. Water. Shelter. Internet. Most people don’t think about the Internet this way. We want to see what happens when we invite them to do so.

The outdoor campaign launches this week in San Francisco, Washington and New York. We’re also running variations of the message through our social platforms. We’ll monitor reactions to see what it sparks. And we will invite conversation in our Mozilla social channels (Facebook & Twitter).

[Billboard images: “Food. Water. Shelter. Internet.” in red and blue.]

Fueling the Movement

Of course, billboards don’t make a movement. That’s not our thinking at all. But we do think experiments and debates matter. Our messages may hit the mark with people and resonate, or they may tick them off. But our goal is to start a conversation about the health of the Internet and the idea that it’s a global resource that needs protecting.

Importantly, this is one experiment among many.

We’re working to bolster the open Internet movement and take it mainstream. We’re building easy encryption technology with the EFF (Let’s Encrypt). We’re trying to make online conversation more inclusive and open with The New York Times and The Washington Post (Coral Project). And we’re placing fellows and working on open Internet campaigns with organizations like the ACLU, Amnesty International, and Freedom of the Press Foundation (Open Web Fellows Program). The idea is to push the debate on many fronts.

About the billboards, we want to know what you think:

  • Has the time come for the Internet to become a mainstream concern?
  • Is it important to you?
  • Does it rank with other primary human needs?

I’m hoping it does, but I’m also ready to learn from whatever the results may tell us. Like any important issue, keeping the Internet healthy and open won’t happen by itself. And waiting for it to happen by itself is not an option.

We need a movement to make it happen. We need you.

The Mozilla Blog: Martin Thomson Appointed to the Internet Architecture Board

Standards are a key part of keeping the Open Web open. The Web runs on standards developed mainly by two standards bodies: the World Wide Web Consortium (W3C), which standardizes HTML and Web APIs, and the Internet Engineering Task Force (IETF), which standardizes networking protocols, such as HTTP and TLS, the core transport protocols for the Web. I’m pleased to announce that Martin Thomson, from the CTO group, was recently appointed to the Internet Architecture Board (IAB), the committee responsible for the architectural oversight of the IETF standards process.

Martin’s appointment recognizes a long history of major contributions to the Internet standards process, including serving as editor for HTTP/2, the newest and much improved version of HTTP, helping to design, implement, and document WebPush, which we just launched in Firefox, and playing major roles in WebRTC, TLS, and Geolocation. In addition to his standards work, Martin has committed code all over Gecko, in areas ranging from the WebRTC stack to NSS. Serving on the IAB will give Martin a platform to do even greater things for the Internet and the Open Web as a whole.

Please join me in congratulating Martin.

Jorge Villalobos: WebExtensions presentation at FOSDEM 2016

Last week, a big group of Mozillians converged in Brussels, Belgium for FOSDEM 2016. FOSDEM is a huge free and open source event, with thousands of attendees. Mozilla had a stand and a “dev room” for a day, which is a room dedicated to Mozilla presentations.

This year I attended for the first time, and I gave a presentation titled Building Firefox Add-ons with WebExtensions. The presentation covers some of the motivations behind the new API. I also spent a little time going over one of the WebExtensions examples on MDN. I only had 30 minutes for the whole talk, so it was all covered fairly quickly.

The presentation went well, and there were lots of people showing interest and asking questions. I felt the same way about all of the Mozilla presentations I attended, which makes me want to kick myself for not trying to go to FOSDEM before. It’s a great venue to discuss our ideas, and I want us to come back and do more. We have lots of European contributors and have been looking for a good venue for a meetup. This looks ideal, so maybe next year ;).

The Servo Blog: This Week In Servo 50

In the last week, we landed 113 PRs in the Servo organization’s repositories.

Alan Jeffrey has been made a reviewer! We look forward to his help with the huge PR backlog :-)

Notable Additions

  • larsberg moved our Linux builds onto reserved EC2 instances. Same cost, way more availability!
  • nox removed the in-tree version of HeapSizeOf
  • kichjang disabled some of our worst intermittents while we investigate the cause
  • manish made WebSockets work in a worker scope
  • aneeshusa fixed our Mac builder provisioning code around the multiple versions of autoconf required by our dependencies
  • bholley continued his work to refactor the Servo style system code for easier uplifting into Gecko
  • ajeffrey released v0.1.0 of the parsell parser combinator library

New Contributors

Screenshot

No screenshot this week.

Meetings

We had a meeting on final updates to our changed meeting times, the status of the stylo work, and the incoming WebRender PR to master.

Florian Quèze: Project ideas wanted for Summer of Code 2016

Google is running Summer of Code again in 2016. Mozilla has had the pleasure of participating many years so far, and even though we weren't selected last year, we are hoping to participate again this year. In the next few weeks, we need to prepare a list of suitable projects to support our application.

Can you think of a 3-month coding project you would love to guide a student through? This is your chance to get a student focusing on it for 3 months! Summer of Code is a great opportunity to introduce new people to your team and have them work on projects you care about but that aren't on the critical path to shipping your next release.

Here are the conditions for the projects:

  • completing the project should take roughly 3 months of effort for a student;
  • any part of the Mozilla project (Firefox, Firefox OS, Thunderbird, Instantbird, SeaMonkey, Bugzilla, L10n, NSS, IT, and many more) can submit ideas, as long as they require coding work;
  • there is a clearly identified mentor who can guide the student through the project.

If you have an idea, please put it on the Brainstorming page, which is our idea development scratchpad. Please follow the instructions at the top.

The deadline to submit project ideas and help us be selected by Google is February 19th.

Note for students: the student application period starts on March 14th, but the sooner you start discussing project ideas with potential mentors, the better.

Nikki Bee: Okay, but What Does Your Work Actually Mean, Nikki? Part 3: Translating A Standard Into Code

Over my previous two posts, I described my introduction to work on Servo, and my experience with learning and modifying the Fetch Standard. Now I’m going to combine these topics today, as I’ll be talking about what it’s like putting the Fetch Standard into practice in Servo. The process is roughly: I pick a part of the Fetch Standard to implement or improve; I write it on a step-by-step basis, often asking many questions; then when I feel it’s mostly complete, I submit my code for review and feedback.

I will talk about the review aspect in my next post, along with other things, as this entry ended up being pretty long!

Where To Start?

Whenever I realize I’m not sure what to be doing for the day, I go over my list of tasks, often talking with my project mentor about what I can do next. There’s a lot more work than I could manage in any internship - especially a 3 month long one - so having an idea of which aspects are the most important is good to keep in mind. Plus, I’m not equally skilled or knowledgeable about every aspect of Fetch or programming in Rust, and learning a new area more than halfway through my internship could be a significant waste of time. So, the main considerations are: “How important is this for Servo?”, “Will it take too long to implement?”, and “Do I know how to do this?”.

“How important is this for Servo?”

Often, my Servo mentor or co-workers are the only people who can answer “How important is this?”, since they’ve all been with Servo for much longer than me, and take a broader level view- personally, I only think of Servo in terms of the Fetch implementation, which isn’t too far off from reality: the Fetch implementation will be used by a number of parts of Servo which can easily use any of it, with clear boundaries for what should be handled by Fetch itself.

I’m not too concerned with what’s most important for the Fetch implementation, since I can’t answer it by myself. There’s always multiple things I could be doing, and I have a better idea for answering the other two aspects.

“Will it take too long to implement?”

“Will it take too long to implement?” is probably the hardest question, but one that gets just a bit easier all the time. Simply put, the more code I write, the better I can predict how long any specific task will take me to accomplish. There are always sort of random chances though: sometimes I run into surprising blocks for a single line of code; or I spend just half a day writing an entire Fetch step with no problems. Maybe with years of experience I will see those easy successes or hard failures coming as well, but for now, I’ll have to be content with rough estimates and a lenient time schedule.

“Do I know how to do this?”

The last question, “Do I know how to do this?”, depends greatly on what “this” is for me to be able to answer. Learning new aspects of Rust is always a closed book to me, in a way- I don’t know how difficult or simple any aspect of it will be to learn. Not to mention, just reading something has a minimal effect on my understanding. I need to put it into practice, multiple times, for me to really understand what I’m doing.

Unfortunately, programming Servo (or any project, really) doesn’t necessarily line up the concepts I need to learn and use in a nice order, so I often need to juggle multiple concepts of varying complexity at once. Generalized programming ideas that aren’t specific to Rust, though, I can better gauge my ability for. Writing tests? Sure, I’ve done that plenty- it shouldn’t be difficult to apply that to Rust. Writing code that handles multiple concurrent threads? I’ve only learned enough to know that those buzzwords mean something- I’d probably need a month to be taught it well!

An Example Of Deciding A Task

Right now, and for the past while, my work on Servo has been focused on writing tests to make sure my implementation of Fetch, and the functions written previously by my co-workers, conform to what’s expected by the Fetch Standard. What factors into deciding that this is a good task, though?

Importance

Most of the steps for Fetch are mostly complete by now. The steps that aren’t coded either cannot be done yet in Servo, or are not necessary for a minimally working Fetch protocol. Sure, I can make the Rust compiler happy- but just because I can run the code at all doesn’t mean it’s working right! Thus, before deciding that the basics of Fetch have been perfected and can be built on, extensive test coverage of the code is significantly important. Testing the code means I can intentionally create many situations, both to make sure the result matches the standard, and that errors come up at only the defined moments.

Time

Writing the tests is often straightforward. I just need to add a lot of conditionals, such as: the Fetch Standard says a basic filtered response has the type “basic”. That’s simple- I need to have the Fetch protocol return a basic filtered response, then verify that the type is “basic”! And so on for every declaration the Fetch Standard makes. The trickier side of this though is that I can’t be absolutely sure, until I run a test, whether or not the existing code supports this.

It’s simple on paper to return a basic filtered response- but when I tell my Fetch implementation to do so, maybe it’ll work right away, or maybe I’m missing several steps or made mistakes that prevent it from happening! The time is a bit open ended as a result, but that can be weighed against how good it would be to catch and repair a significant error.
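The assertion half really is the easy part. Roughly, with made-up stand-in types rather than Servo’s actual Fetch types or test harness, it boils down to something like this:

// Stand-in types purely for illustration; Servo's real Response is richer.
#[derive(Debug, PartialEq)]
enum ResponseType {
    Basic,
    Cors,
    Opaque,
}

struct Response {
    response_type: ResponseType,
}

// The Fetch Standard says a basic filtered response has the type "basic",
// so a test just has to produce one and assert exactly that.
fn assert_is_basic_filtered(response: &Response) {
    assert_eq!(response.response_type, ResponseType::Basic);
}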

Knowledge

I have experience with writing tests, as I like them conceptually very much. Testing a program by hand is often slow and difficult, and errors are hard to reproduce. When the testing itself is written in code, everything is accessible and easily repeatable. If I don’t know what’s going on, I can update the test code to be more informative and run it again, or dig into it with a debugger. So I have the knowledge of tests themselves- but what about testing in Rust, or even Servo?

I can tell you the answer: I hadn’t foreseen much difficulty (reading over how to write tests in Rust was easy), but I ended up lacking a lot of experience with testing Servo. Since Servo is a browser engine, and the Fetch protocol deals with fetching any kind of resource a browser accesses, I need to handle having a resource on a running server for Fetch to retrieve. While this is greatly simplified thanks to some of the libraries Servo works with, it still took a lot of time for me to build a good mental model of what I was doing and thus be able to write effective tests.

Actually Writing The Code

So I’ve said a lot about what goes into picking a task, but what about actually writing the code? That requires knowing how to translate a step from the programming language-abstracted Fetch Standard into Rust code. Sometimes this is almost exactly like the original writing, such as step 1 of the Main Fetch function, “Let response be null”, which in Rust looks like this: let response = None;.

In Rust, the let keyword makes a variable binding- it’s just a coincidence that the Main Fetch step uses the same word. And Rust’s equivalent of null is called None (which declares a variable that currently holds nothing, and cannot be used for anything while it is still None, but it sets aside a place for response to be filled in and used later).
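
For readers new to Rust, here is a tiny, runnable illustration of the same idea, with String standing in for Servo's actual response type:

  fn main() {
      // "Let response be null" - an Option that starts out holding nothing.
      let mut response: Option<String> = None;
      assert!(response.is_none());
      // A later step fills it in; only then can the value actually be used.
      response = Some(String::from("hello"));
      if let Some(r) = response {
          println!("response is now: {}", r);
      }
  }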

A More Complex Step

Of course, not every step is so literal to translate to Rust. Take step 10 of Main Fetch for instance: “If main fetch is invoked recursively, return response”. The first part of this is knowing what it’s saying, which is “if main fetch is invoked by itself or by another function it invokes (i.e., invoked recursively), return the response variable”. Translating that to Rust literally gives us if main fetch is invoked recursively { return response }. This isn’t very good- main fetch is invoked recursively isn’t valid code, it’s a fragment of an English sentence.

The step doesn’t answer this for me, so I need to get answers elsewhere. There are two things I can do: keep reading more of the Fetch Standard (and check personal notes I’ve made on it), or ask for help. I’ll do both, in that order. Right now, I have two questions I need answers to: “When is Main Fetch invoked recursively?”, and “How can I tell when Main Fetch is invoked recursively?”.

Doing My Own Research

I often like to try to get answers myself before asking other people for help (although sometimes I do both at the same time, which has led to me answering my own questions immediately after asking a few times). I think it’s a good idea to spend a few minutes trying to learn something on my own, but also not to let myself stay stuck on any one problem for more than about 15 minutes, or 30 minutes at the absolute worst. I want to use my own time well- asking a question I can answer on my own shortly isn’t necessary, but not asking a question that is beyond my capability can end up wasting a lot more time.

So I’ll start trying to figure out this step by answering for myself, “When is Main Fetch invoked recursively?”. Most functions in the Fetch Standard invoke at least one other function, which is how it all works- each part is separated so they can be understood in smaller chunks, and can easily repeat specific steps, such as invoking Main Fetch again.

What I would normally need to do here is read through the Fetch Standard to find at least one point where Main Fetch is invoked from itself or another function it invokes. Thankfully, I don’t have to go read through all of those functions, and everything they call, and so on, until I happen to get my answer, thanks to some notes I took earlier, when I was reading through as much of the Fetch Standard as I could.

I had decided to make short notes declaring when each Fetch function calls another one, and in what step it happens. At the time I did so to help give me an understanding of the relations between all the Fetch functions- now, it’s going to help me pinpoint when Main Fetch is invoked recursively! First I look for what calls Main Fetch, other than the initial Fetch function, which primarily serves to set up some basic values and then pass them on to Main Fetch. The only function that does so is HTTP Redirect Fetch, in step 15. I can also see that HTTP Fetch calls HTTP Redirect Fetch in step 5, and that Main Fetch calls HTTP Fetch in step 9.

That was easy! I’ve now answered the question: “Main Fetch is invoked recursively by HTTP Redirect Fetch.”

Questions Are Welcome

However, I’m still at a loss for answering “How can I tell when Main Fetch is invoked recursively?”. Step 15 of HTTP Redirect Fetch doesn’t say to set any variable that would say “oh yeah, this is a recursive call now”. In fact, no such variable is defined by the Fetch Standard to be used!* So, I’ll ask my Servo mentor on IRC.

This example I’m covering actually happened a month or so ago, so I’m not going to pretend I can remember exactly what the conversation was. But the result of it was that the solution is actually very simple: I add a new boolean (a variable that is just true, or false) parameter (a variable that must be sent to a function when invoking it) to Main Fetch that’ll say whether or not it’s being invoked recursively. When Main Fetch is invoked from the Fetch function, I set it to false; when Main Fetch is invoked from HTTP Redirect Fetch, I set it to true.
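
The shape of that change, in very rough Rust, looks something like the sketch below. The names and signatures are simplified stand-ins, not Servo's actual code:

  struct Request;
  struct Response;

  fn main_fetch(request: &mut Request, recursive_flag: bool) -> Response {
      let response = Response; // earlier steps would actually build this up
      if recursive_flag {
          // Step 10: "If main fetch is invoked recursively, return response."
          return response;
      }
      // ... the remaining, non-recursive steps would run here ...
      response
  }

  fn fetch(request: &mut Request) -> Response {
      // The entry point is never a recursive call.
      main_fetch(request, false)
  }

  fn http_redirect_fetch(request: &mut Request) -> Response {
      // HTTP Redirect Fetch re-invokes Main Fetch, so it passes true.
      main_fetch(request, true)
  }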

Repeat

This is the typical process for every single function and step in the Fetch Standard. For every function I might implement, improve, or test, I repeat a similar decision process. For every step, I repeat a similar question-answer procedure, although unfortunately not all English-to-code translations are resolved as quickly as the examples in this post.

Sometimes picking a new part of the Fetch Standard to implement means it ends up relying on another piece, and needs to be put on hold for that, or occasionally my difficulty with a step might result in a request for change to the Fetch Standard to improve understanding or logic, as I’ve described in previous posts.

This, in sum with the other two parts of this series, effectively describes the majority of my work on Servo! Hopefully, you now have a better idea of what it all means.

Fourth post ending.

  • After I shared a first draft of this post, it was pointed out that adding this to the Fetch spec would be simple to do, so this case will likely soon become irrelevant for understanding Fetch. But it’s still a neat, concise example, if easily outdated.

Air MozillaSuMo weekly community call

SuMo weekly community call The SuMo (Support Mozilla) community meets every Monday in the SuMo Vidyo channel; meetings are about 30 minutes long and start at 17:00 UTC.

Brian KingConnected Devices at the Singapore Leadership Summit

Preface

Around 150 Mozillians gathered in Singapore for education, training, skills building, and planning to bring them and their communities into the fold on the latest Participation projects. With the overarching theme being leadership training, we had two main tracks. The first was Campus Campaign, a privacy focused effort to engage students as we work more with that demographic this year. Second, and the focus of this post, is Connected Devices.

Uncharted Ground

Having built out a solid Firefox OS Participation program last year, the organisation is now moving more into Connected Devices. The challenge we have is to evolve the program to fit the new direction. However, the strategy and timeline have not been finalised, so in Singapore we needed to get people excited in a broader sense about what we are doing in this next phase of computing, and see what could be done now to start hacking on things.

Sessions

We had three main sessions during the weekend. Here is a brief summary of each.

Strategy And Update

We haven’t been standing still since we announced the changes in December. During this session John Bernard walked us through why Mozilla is moving in this direction, how the Connected Devices team has been coming together, and how initial project proposals have been going through the ‘gating process’. The team will be structured in three parts based on the three core pillars of Core, Consumer, and Collaboration. The latter is in essence an embedded participation team, led by John. We talked about some of the early project ideas floating around, and we discussed possible uses of the foxfooding program, cunningly labelled ‘Outside the Fox’.

John Bernard presenting

Dietrich Ayala then jumped in and talked about some of the platform APIs that we can use today to hook together the Web of Things. There are many ways to experiment today using existing Firefox OS devices and even Firefox desktop and mobile. The set of APIs in Firefox OS phones allows access to a wide range of sensors, enabling experimentation with physical presence detection, speech synthesis and recognition, and many types of device connectivity.

Check out the main sessions slides and Dietrich’s slides.

Research Co-Creation Workshop

Led by Rina Jensen and Jared Cole, the Participation and Connected Devices teams have been working on a project to explore the open source community and understand what makes people contribute and be part of communities, from open hardware projects to open data projects. During the session, a few key insights from that work were shared and provided to the group as input for a co-creation exercise. The participants then spent the next hour generating ideas focused on the ideal contributor experience. Rina and Jared are going to continue working closely with the Participation and Connected Devices teams to come up with a clear set of actionable recommendations.

Flore leading the way during co-creation exercise

You can find more information and links to session materials on the session wiki page.

Designing and Planning for Participation

True participation is working on all aspects of a project, from ideation, through implementation, to launch and beyond. The purpose of this session was two-fold:

  1. How can we design an effective participation program for Connected Devices together?
  2. What can we start working on now? We wanted attendees to share their project ideas and start setting up the infrastructure NOW.

On the first topic, we didn’t get very far on the day due to time constraints, but it is of course something we work on all the time at Mozilla. We have a good foundation built with the Firefox OS Participation project. Connected Devices is a field where we can innovate and excel, and we see a lot of excitement among Mozillians to lead the way here. This discussion will continue.

For the second topic, we wanted to come out of the weekend with something tangible, to send a strong message that volunteer leadership is looking to the future and is ready to build things now. We heard some great ideas, and then broke out into teams to start working on them. The result is thirteen projects to get some energy behind, and I’m sure many more will arise.

Read more about the projects.

In order to accelerate the next stage of tinkering and ideation, we’ve set up a small Mozilla Reps innovation fund to hopefully set in motion a more dynamic environment in which Mozillians can feel at home.

Working on a project idea

To Conclude

Connected Devices, Internet of Things, Web of Things. You have heard many labels for what is essentially the next era of computing. At Mozilla we want to ensure that, technology-wise, the Web is at the forefront of this revolution, and that the values we hold dear such as privacy are central. Now more than ever, open is important. Our community leaders are ready. Are you?

QMOFirefox 45 Beta 3 Testday Results

Hello Mozillians!

As you may already know, last Friday – February 5th – we held a new Testday, for Firefox 45 Beta 3 and it was another successful event!

We’d like to take this opportunity to thank Iryna Thompson, Chandrakant Dhutadmal, Mohammed Adam, Vuyisile Ndlovu, Spandana Vadlamudi, Ilse Macías, Bolaram Paul, gaby2300, Ángel Antonio, Preethi Dhinesh  and the people from our Bangladesh Community: Rezaul Huque Nayeem, Hossain Al Ikram, Raihan Ali, Moniruzzaman, Khalid Syfullah Zaman, Amlan BIswas, Abdullah Umar Nasib, Najmul Amin, Pranjal Chakraborty, Azmina Akter Papeya, Shaily Roy, Kazi Nuzhat Tasnem, Md Asaduzzaman John, Md.Tarikul Islam Oashi, Fahmida Noor, Fazle Rabbi, Md. Almas Hossain, Mahfuza Humayra Mohona, Syed Nayeem Roman, Saddam Hossain, Shahadat  Hossain, Abdullah Al Mamun, Maruf Rahman, Muhtasim kabir, Ratul Ahmed, Mita Halder, Md Faysal Rabib, Tanvir Rahman, Tareq Saifullah, Dhiman roy, Parisa Tabassum, SamadTalukdar, Zubair Ahmed, Toufiqul haque Mamun, Md. Nurnobi, Sauradeep Dutta, Noban Hasan, Israt  jahan, Md. Nazmus Shakib (Robin), Zayed News, Ashickur Rahman, Hasna Hena, Md. Rahimul islam, Mohammad Maruf Islam, Mohammed Jawad Ibne Ishaque, Kazi Nuzhat Tasnem and Wahiduzzaman Hridoy for getting involved in this event and making Firefox as best as it could be.

Results:

Also a big thank you goes to all our active moderators.

Keep an eye on QMO for upcoming events!

Gregory SzorcMozReview Git Support and Improved Commit Mapping

MozReview - Mozilla's Review Board based code review tool - now supports ingestion from Git. Previously, it only supported Mercurial.

Instructions for configuring Git with MozReview are available. Because blog posts are not an appropriate medium for documenting systems and processes, I will not say anything more here on how to use Git with MozReview.

Somewhat related to the introduction of Git support is an improved mechanism for mapping commits to existing review requests.

When you submit commits to MozReview, MozReview has to decide how to map those commits to review requests in Review Board. It has to choose whether to recycle an existing review request or create a new one. When recycling, it has to pick an appropriate one. If it chooses incorrectly, wonky things can happen. For example, a review request could switch to tracking a new and completely unrelated commit. That's bad.

Up until today, our commit mapping algorithm was extremely simple. Yet it seemed to work 90% of the time. However, a number of people found the cracks and complained. With Git support coming online, I had a feeling that Git users would find these cracks with higher frequency than Mercurial users due to what I perceive to be variations in the commit workflows of Git versus Mercurial. So, I decided to proactively improve the commit mapping before the Git users had time to complain.

Both the Git and Mercurial MozReview client-side extensions now insert a MozReview-Commit-ID metadata line in commit messages. This line effectively defines a (likely) unique ID that identifies the commit across rewrites. When MozReview maps commits to review requests, it uses this identifier to find matches. What this means is that history rewriting (such as reordering commits) should be handled well by MozReview and should not confuse the commit mapping mechanism.
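
To illustrate, a commit message that has been through the extension might end up looking roughly like the following. The bug number, summary, and ID value here are all made up for illustration:

  Bug 999999 - Frobnicate the widget before painting it

  MozReview-Commit-ID: 7pcUVtqwPdN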

I'm not claiming the commit mapping mechanism is perfect. In fact, I know of areas where it can still fall apart. But it is much better than it was before. If you think you found a bug in the commit mapping, don't hesitate to file a bug. Please have it block bug 1243483.

A side-effect of introducing this improved commit mapping is that commit messages will have a MozReview-Commit-ID line in them. This may startle some. Some may complain about the spam. Unfortunately, there's no better alternative. Both Mercurial and Git do support a hidden key-value dictionary for each commit object. In fact, the MozReview Mercurial extension has been storing the very commit IDs that now appear in the commit message in this dictionary for months! Unfortunately, actually using this hidden dictionary for metadata storage is riddled with problems. For example, some Mercurial commands don't preserve all the metadata. And accessing or setting this data from Git is painful. While I wish this metadata (which provides little value to humans) were not located in the commit message where humans could be bothered by it, it's really the only practical place to put it. If people find it super annoying, we could modify Autoland to strip it before landing. Although, I think I like having it preserved because it will enable some useful scenarios down the road, such as better workflows for uplift requests. It's also worth noting that there is precedent for storing unique IDs in commit messages for purposes of commit mapping in the code review tool: Gerrit uses Change-ID lines.

I hope you enjoy the Git support and the more robust commit to review request mapping mechanism!

Daniel GlazmanInventory and Strategy

“There’s class warfare, all right, but it’s my class, the native class, that’s making war, and we’re winning.” -- Android and iOS, blatantly stolen from Warren Buffett

Firefox OS tried to bring Web apps to the mobile world and it failed. It has been brain dead - for phones - for three days and the tubes preserving its life will be turned off in May 2016. I don't believe at all myself in the IoT space being a savior for Mozilla. There are better and older competitors in that space, companies or projects that bring smaller, faster, cleaner software architectures to IoT where footprint and performance are an even more important issue than in the mobile space. Yes, this is a very fragmented market; no, I'm not sure FirefoxOS can address it and reach the critical mass. In short, I don't believe in it at all.

Maybe it's time to discuss a little bit a curse word here: strategy. What would be a strategy for the near- and long-term future for Mozilla? Of course, what's below remains entirely my own view and I'm sure some readers will find it pure delirium. I don't really mind.

To do that, let's look a little bit at what Mozilla has in hands, and let's confront that and the conclusion drawn from the previous lines: native apps have won, at least for the time being.

  • Brains! So many hyper-talented brains at Mozilla!
  • Both desktop and mobile knowledge
  • An excellent, but officially unmaintained, runtime
  • Extremely high expertise on Web Standards and implementation of Web Standards
  • Extremely high expertise on JS
  • asm.js
  • Gaia, that implements a partial GUI stack from html but limited to mobile

We also need to take a look at Mozilla's past. This is not an easy nor pleasant inventory to make but I think it must be done here and to do it, we need to go back as far in time as the Netscape era.

Technology Year(s) Result
Anya 2003 AOL (Netscape's parent company) did not want Anya, a remote browser moving most of the CPU constraints to the server, and it died despite being open-sourced by its author. At the same time, Opera successfully launched Opera Mini and eventually acquired its SkyFire competitor. Opera Mini has been a very successful product on legacy phones and even smartphones in areas with poor mobile connectivity.
XUL 2003- Netscape - and later Mozilla - did not see any interest in bringing XUL to Standards committees. When competitors eventually moved to XML-based languages for UI, they adopted solutions (XAML, Flex, ...) that were not interoperable with it.
Operating System 2003- A linux+Gecko Operating System is not a new idea. It was already discussed back in 2003 - yes, 2003 - at Netscape and was too often met with laughter. It was mentioned again multiple times between 2003 and 2011, without any apparent success.
Embedding 2004- Embedding has always been a poor relation in Gecko's family. Officially dropped loooong ago, it drove embedders to WebKit and then Blink. At the time embedding should have been improved, the focus was instead solely on Firefox for desktop. While I completely understand the rationale behind a focus on Firefox for desktop at that time, the consequences of abandoning Embedding have been seriously underestimated.
Editing 2005- Back in 2004/2005, it was clear Gecko had the best in-browser core editor on the market. Former Netscape editor peers working on Dreamweaver compared mozilla/editor with what Macromedia/Adobe had in hand. The comparison was vastly in favor of Mozilla. It was also easy to predict the aging Dreamweaver would soon need a replacement for its editor core. But editing was considered non-essential at that time, more a burden than an asset, and no workforce was permanently assigned to it.
Developer tools 2005 In 2005, Mozilla was so completely mistaken on Developer Tools, a powerful attractor for early adopters and Web Agencies, that it wanted to get rid of the error console. At the same moment, the community was calling for more developer tools.
Runtime 2003- XULRunner has been quite successful for such a complex technology. Some rather big companies believed enough in it to implement apps that, even if you don't know their name, are still everywhere. As an example, here's at least one very large automotive group in Europe, a world-wide known brand, that uses XULRunner in all its test environments for car engines. That means all garages dealing with that brand use a XULRunner-fueled box...
But unfortunately, XULrunner was never considered as essential, up to the point its name is still a codename. For some time, the focus was instead given to GRE, a shared runtime that was doomed to fail from the very first minute.
Asian market 2005 While the Asian market was exploding, Gecko was missing a major feature: vertical writing. It prevented Asian embedders from considering Gecko as the potential rendering engine to embed in Ebook reading systems. It also closed access to the Asian market for many other usages. But vertical writing did not become an issue to fix for Mozilla until 2015.
Thunderbird 2007 Despite growing adoption of Thunderbird in governmental organizations and some large companies, Mozilla decided to spin off Thunderbird into a Mail Corporation because it was unable to get a revenue stream from it. MailCo was eventually merged back with Mozilla, and Thunderbird is again, in 2015/2016, in limbo at Mozilla.
Client Customization Kit 2003- Let's be clear, the CCK has never been seen as a useful or interesting project. Maintained only through the incredible will and talent of a single external contributor, it is nevertheless what many corporations rely on to release Firefox to their users. Mozilla had no interest in corporate users. Don't we spend only 60% of our daily time at work?
E4X 2005-2012 Everyone had high expectations about E4X and many were ready to switch to E4X to replace painful DOM manipulations. Unfortunately, it never allowed manipulating DOM elements (BMO bug 270553), making it totally useless. E4X support was deprecated in 2012 and removed after Firefox 17.
Prism (WebRunner) 2007-2009 Prism was a webrunner, i.e. a desktop platform to run standalone self-contained web-based apps. Call them widgets if you wish. Prism was abandoned in 2009 and replaced by Mozilla Chromeless that is itself inactive too.
Marketplace 2009 Several people called for an improved marketplace where authors could sell add-ons and standalone apps. That required a licensing mechanism and the possibility to blackbox scripting. It was never implemented that way.
Browser Ballot 2010 The BrowserChoice.eu thing was a useless battle. If it brought some users to Firefox on the Desktop, the real issue was clearly the lack of browser choice on iOS, world-wide. That issue still stands as of today.
Panorama (aka Tab Groups) 2010 When Panorama reached light, some in the mozillian community (including yours truly) said it was bloated, not extensible, not localizable, based on painful code, hard to maintain on the long run and heterogeneous with the rest of Firefox, and it was trying to change the center of gravity of the browser. Mozilla's answer came rather sharply and Panorama was retained. In late 2015, it was announced that Panorama will be retired because it's painful to maintain, is heterogeneous with the rest of Firefox and nobody uses it...
Jetpack 2010 Jetpack was a good step on the path towards HTML-based UI, but a jQuery-like framework was not seen by the community as what authors needed and it missed a lot of critical things. It never really gained traction despite being the "official" add-on way. In 2015, Mozilla announced it would implement the WebExtensions global object promoted by Google Chrome, and WebExtensions is just a more modern and better integrated JetPack on steroids. It also means being Google's assistant to reach the two implementations' standardization constraint again...
Firefox OS 2011 The idea of a linux+Gecko Operating System finally touched ground. 4 years later, the project is dead for mobile.
Versioning System 2011 When Mozilla moved to faster releases for Firefox, large corporations having slower deployment processes reacted quite vocally. Mozilla replied it did not care about dinosaurs of the past. More complaints led to ESR releases.
Add-ons 2015 XUL-based add-ons have been one of the largest attractors to Firefox. AdBlock+ alone deserves kudos, but more globally, the power of XUL-based add-ons that could interact with the whole Gecko platform and all of Firefox's UI has been a huge market opener. In 2015/2016, Mozilla plans to ditch XUL-based add-ons without having a real replacement for them, feature-per-feature.
Evangelism 2015 While Google and Microsoft have built first-class tech-evangelism teams, Mozilla made all its team flee in less than 18 months. I don't know (I really don't) the reason behind that intense bleeding but I read it as a very strong warning signal.
Servo 2016 Servo is the new cool kid on the block. With parallel layout and a brand new architecture, it should allow new frontiers in the mobile world, finally unleashing the power of multicores. But instead of officially increasing the focus on Servo and decreasing the focus on Gecko, Gecko is going to benefit from Servo's rust-based components to extend its life. This is the old sustaining/disruptive paradigm from Clayton Christensen.

(I hope I did not make too many mistakes in the table above. At least, that's my personal recollection of the events. If you think I made a mistake, please let me know and I'll update the article.)

Let's be clear then: Mozilla really succeeded only three times. First, with Firefox on the desktop. Second, enabling the Add-ons ecosystem for Firefox. Third, with its deals with large search engine providers. Most of the other projects and products were eventually ditched for lack of interest, misunderstanding, time-to-market and many other reasons. Mozilla is desperately looking for a fourth major opportunity, and that opportunity can only extend the success of the first one or be entirely different.

The market constraints I see are the following:

  • Native apps have won
  • Mozilla's reputation as an embedded solutions provider among manufacturers will probably suffer a bit from Firefox OS for phones' death. BTW, it probably suffers a bit among some employees too...

Given the assets and the skills, I see then only two strategic axes for Moz:

  1. Apple must accept third-party rendering engines even if it's necessary to sue Apple.
  2. Even if native apps have won, Web technologies remain the most widely adopted technologies by developers of all kinds and guess what, that's exactly Mozilla's core knowledge! Let's make native apps from Web technos then.

I won't discuss item 1. I'm not a US lawyer and I'm not even a lawyer. But for item 2, here's my idea:

  1. While asm.js "provides a model closer to C/C++" (quote from asmjs.org's FAQ), it's still not possible to compile asm.js-based JavaScript into native code. I suggest defining a subset of ES2015/2016 that can be compiled to native, for instance through C++, C#, Obj-C and Java. I suggest building the corresponding multi-target compiler. Before telling me it's impossible, please look at Haxe.
  2. I suggest extending the html "dialect" Gaia implements to cross-platform native UI and submitting it immediately to Standards bodies. Think Qt's ubiquity. The idea is not to show native-like (or even native) UI inside a browser window... The idea is to directly generate browser-less native UI from an html-based UI language, CSS and JS that can deal with all platforms' UI elements. System menus, dock, icons, windows, popups, notifications, drawers, trees, buttons, whatever. Even if compiled, the UI should be DOM-modifiable just like XUL is today.
  3. WebComponents are ugly, and Google-centric. So many people think that and so few dare say it... Implementing them in Gecko acknowledges the power of Gmail and other Google tools, but WebComponents remain ugly and make Mozilla a follower. I understand why Firefox needs them. But for my purpose, a simpler and certainly cleaner way to componentize and compile (see item 1) the behaviours of these components to native would be better.
  4. Build a cross-platform, cross-device html+CSS+JS-based compiler to native apps from the above. It should be dead simple to install and use. A newbie should be able to get a native "Hello World!" app in minutes from a trivial html document. When a browser's included in the UI, make Gecko (or Servo) the default choice.
  5. Have a build farm where such html+CSS+JS are built for all platforms. Sell that service. Mozilla already knows pretty well how to do build farms.

That plan addresses:

  • Runtime requests
  • Embedding would become almost trivial, and far easier than Chromium Embedded Framework anyway... That will be a huge market opener.
  • XUL-less future for Firefox on Desktop and possibly even Thunderbird
  • XUL-less future for add-ons
  • unique source for web-based app and native app, whatever the platform and the device
  • far greater performance on mobile
  • A more powerful basis for Gaia's future
  • JavaScript is currently always readable through a few tools, from the Console to the JS debugger and app authors don't want that.
  • a very powerful basis for Gaming, from html and script
  • More market share for Gecko and/or Servo
  • New revenue stream.

There are no real competitors here. All other players in that field use a runtime that does not completely compile script to native, or are not based on Web Standards, or they're not really ubiquitous.

I wish the next-generation native source editor, the next-gen native Skype app, the next-gen native text processor, the next-gen native online and offline twitter client, the next native Faecbook app, the next native video or 3D scene editor, etc. could be written in html+CSS+ECMAScript and compiled to native and if they embed a browser, let be it a Mozilla browser if that's allowed by the platform.

As I wrote at the top of this post, you may find the above unfeasible, dead stupid, crazy, arrogant, expensive, whatever. Fine by me. Yes, as a strategy document, that's rather light w/o figures, market studies, cost studies, and so on. Absolutely, totally agreed. Only allow me to think out loud, and please do the same. I do because I care.

Updates:

  • E4X added
  • update on Jetpack, based on feedback from Laurent Jouanneau
  • update on Versioning and ESR, based on feedback from Fabrice Desré (see comments below)

Clarification: I'm not proposing to do semi-"compilation" of html à la Apache Cordova. I suggest turning a well-chosen subset of ES2015 into a really native app, and that's entirely different.

This Week In RustThis Week in Rust 117

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us an email! Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

This week's edition was edited by: Vikrant and Andre.

Updates from Rust Community

News & Blog Posts

Notable New Crates & Project Updates

Updates from Rust Core

121 pull requests were merged in the last week.

Notable changes

New Contributors

  • Alexander Lopatin
  • Brandon W Maister
  • Nikita Baksalyar
  • Paul Smith
  • Prayag Verma
  • qpid
  • Reeze Xia
  • Ryan Thomas
  • Sandeep Datta
  • Sean Leffler

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now. This week's FCPs are:

New RFCs

Upcoming Events

If you are running a Rust event please add it to the calendar to get it mentioned here. Email Erick Tryzelaar or Brian Anderson for access.

fn work(on: RustProject) -> Money

Tweet us at @ThisWeekInRust to get your job offers listed here!

Crate of the Week

This week's Crate of the Week is roaring, the Rust version of Prof. D. Lemire's compressed bitmap data structure. I can personally attest that both the Rust and Java versions compare very favorably in both speed and size to other bit sets and are easy to use.

Thanks to polyfractal for the suggestion.

Submit your suggestions for next week!

Mike HommeySSH through jump hosts, revisited

Close to 7 years ago, I wrote about SSH through jump hosts. Twice. While the method used back then still works, OpenSSH has grown a new option in version 5.3 that allows it to be simplified a bit, by not using nc.

So here is an updated rule, version 2016:

Host *+*
ProxyCommand ssh -W $(echo %h | sed 's/^.*+//;s/^\([^:]*$\)/\1:22/') $(echo %h | sed 's/+[^+]*$//;s/\([^+%%]*\)%%\([^+]*\)$/\2 -l \1/;s/:\([^:+]*\)$/ -p \1/')
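
As a worked example with hypothetical host names: for ssh user1%jump+target, %h is user1%jump+target, so the first sed expression reduces it to target:22 and the second to jump -l user1. The rule therefore effectively runs:

ProxyCommand ssh -W target:22 jump -l user1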

The syntax you can use to connect through jump hosts hasn’t changed compared to previous blog posts:

  • With one jump host:
    $ ssh login1%host1:port1+host2:port2 -l login2
  • With two jump hosts:
    $ ssh login1%host1:port1+login2%host2:port2+host3:port3 -l login3
  • With three jump hosts:
    $ ssh login1%host1:port1+login2%host2:port2+login3%host3:port3+host4:port4 -l login4
  • etc.

Logins and ports can be omitted.

Update: Add missing port to -W flag when one is not given.

Jennifer BorissOnto New Challenges

After a wild and wonderful year and a half at Reddit, I’ve decided to move on. And when I say wild, I mean it. The past year has been the […]

Michael KohlerMozilla Switzerland Goals H1 2016

Back in November we had a Community Meetup. The goal was to get a current status on the Community and define plans and goals for 2016. To do that, we started with a SWOT-Analysis. You can find it here.

With these remarks in mind, we started to define goals for 2016. Since there are a lot of changes within one year, the goals will currently only focus on the first part of the year. Then we can evaluate them, shift metrics if needed, and define new goals. This allows us to be more flexible.

The goals are highly influenced by the OKR (Objective – Key Results) Framework. To document open issues that support this goal, I have created a repository in our MozillaCH GitHub organization. The goal is to assign the “overall goal” label to each issue. You can find a good documentation on GitHub issues in their documentation. There is a template you can use for new issues.

  • Objective 1: The community is vibrant and active due to structured contribution areas
  • Objective 2: MozillaCH is a valuable partner for privacy in Switzerland
  • Objective 3: There is a vibrant community in the “Romandie” which is part of the overall community
  • Objective 4: The MozillaCH website is the place to link to for community topics
  • Objective 5: With talks and events we increase our reach and provide a valuable information source regarding the Open Web
  • Objective 6: Social Media is a crucial part of our activities providing valuable information about Mozilla and the Open Web


We know that not all of those goals are easily achievable, but this gives us a good way to be ambitious. To a successful first half of 2016, let’s bring our community further and keep rocking the Open Web!


Cameron KaiserImagine there's no Intel transition ...

... and with a 12-core POWER8 workstation, it's easy if you try. Reported to me by a user is the Raptor Engineering Talos Secure Workstation, the first POWER workstation I've seen in years since the last of the PowerPC Intellistations. You can sign up for preorders so they know you're interested. (I did, of course. Seriously. In fact, I'm actually considering buying two, one as a workstation and the second as a new home server.) Since it's an ATX board, you can just stick it in any case you like with whatever power supply and options you want and configure to taste.

Before you start hyperventilating over the $3100 estimated price (which includes the entry-level 8-core CPU), remember that the Quad G5, probably the last major RISC workstation, cost $3300 new and this monster would drive it into the ground. Plus, at "only" 130 watts TDP, it certainly won't run anywhere near as hot as the G5 did either. Likely it will run some sort of Linux, though I can't imagine with its open architecture that the *BSDs wouldn't be on it like a toupee on William Shatner. Let's hope they get enough interest to produce a few, because I'd love to have an excuse to buy one and I don't need much of an excuse.

Mozilla Addons BlogHi, I’m Your New AMO Editor

You may have wondered who this “Scott DeVaney” is who posted February’s featured add-ons. Well it’s me. I just recently joined AMO as your new Editorial & Campaign Manager. But I’m not new to Mozilla; I’ve spent the past couple years managing editorial for Firefox Marketplace.

This is an exciting deal, because my job will be to not only maintain the community-driven editorial processes we have in place today, but to grow the program and build new endeavors designed to introduce even more Firefox users to the wonders of add-ons.

In terms of background, I’ve been editorializing digital content since 1999 when I got my first internet job as a video game editor for the now-dead CheckOut.com. That led to other editorial gigs at DailyRadar, AtomFilms, Shockwave, Comedy Central, and iTunes (before all that I spent a couple years working as a TV production grunt where my claim to fame is breaking up a cast brawl on the set of Saved by the Bell—The New Class; but that’s a story for a different blog.)

I’m sdevaney on IRC, so don’t be a stranger.

Mozilla Addons BlogAdd-on Compatibility for Firefox 45

Firefox 45 will be released on March 8th. Here’s the list of changes that went into this version that can affect add-on compatibility. There is more information available in Firefox 45 for Developers, so you should also give it a look.

General

UI

XPCOM

Signing

  • Firefox is currently enforcing add-on signing, with a preference to override it. Firefox 46 will remove the preference entirely, which means your add-on will need to be signed in order to run in release versions of Firefox. You can read about your options here.

New

  • Support a simplified JSON add-on update protocol. Firefox now supports a JSON update file for add-ons that manage their own automatic updates, as an alternative to the existing XML format. For new add-ons, we suggest using the JSON format. You shouldn’t immediately switch for older add-ons until most of your users are on 45 and later.
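
For illustration, a minimal JSON update manifest might look roughly like this. The add-on ID, version, and URL are invented; consult the linked documentation for the authoritative schema:

  {
    "addons": {
      "my-addon@example.com": {
        "updates": [
          { "version": "1.2", "update_link": "https://example.com/my-addon-1.2.xpi" }
        ]
      }
    }
  }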

Let me know in the comments if there’s anything missing or incorrect on these lists. If your add-on breaks on Firefox 45, I’d like to know.

The automatic compatibility validation and upgrade for add-ons on AMO will happen in the coming weeks, so keep an eye on your email if you have an add-on listed on our site with its compatibility set to Firefox 44.

Daniel PocockGiving up democracy to get it back

Do services like Facebook and Twitter really help worthwhile participation in democracy, or are they the most sinister and efficient mechanism ever invented to control people while giving the illusion that they empower us?

Over the last few years, groups on the left and right of the political spectrum have spoken more and more loudly about the problems in the European Union. Some advocate breaking up the EU, while behind the scenes milking it for every handout they can get. Others seek to reform it from within.

Yanis Varoufakis on motorbike

Most recently, former Greek finance minister Yanis Varoufakis has announced plans to found a movement (not a political party) that claims to "democratise" the EU by 2025. Ironically, one of his first steps has been to create a web site directing supporters to Facebook and Twitter. A groundbreaking effort to put citizens back in charge? Or further entangling activism in the false hope of platforms that are run for profit by their Silicon Valley overlords? A Greek tragedy indeed, in the classical sense.

Varoufakis rails against authoritarian establishment figures who don't put the citizens' interests first. Ironically, big data and the cloud are a far bigger threat than Brussels. The privacy and independence of each citizen is fundamental to a healthy democracy. Companies like Facebook are obliged - by law and by contract - to service the needs of their shareholders and advertisers paying to study and influence the poor user. If "Facebook privacy" settings were actually credible, who would want to buy their shares any more?

Facebook is more akin to an activism placebo: people sitting in their armchair clicking to "Like" whales or trees are having hardly any impact at all. Maintaining democracy requires a sufficient number of people to be actively involved, whether it is raising funds for worthwhile causes, scrutinizing the work of our public institutions or even writing blogs like this. Keeping them busy on Facebook and Twitter renders them impotent in the real world (but please feel free to alert your friends with a tweet)

Big data is one of the areas that requires the greatest scrutiny. Many of the professionals working in the field are actually selling out their own friends and neighbours, their own families and even themselves. The general public and the policy makers who claim to represent us are oblivious or reckless about the consequences of this all-you-can-eat feeding frenzy on humanity.

Pretending to be democratic is all part of the illusion. Facebook's recent announcement to deviate from their real-name policy is about as effective as using sunscreen to treat HIV. By subjecting themselves to the laws of Facebook, activists have simply given Facebook more status and power.

Data means power. Those who are accumulating it from us, collecting billions of tiny details about our behavior, every hour of every day, are fortifying a position of great strength with which they can personalize messages to condition anybody, anywhere, to think the way they want us to. Does that sound like the route to democracy?

I would encourage Mr Varoufakis to get up to speed with Free Software and come down to Zurich next week to hear Richard Stallman explain it the day before launching his DiEM25 project in Berlin.

Will the DiEM25 movement invite participation from experts on big data and digital freedom and make these issues a core element of their promised manifesto? Is there any credible way they can achieve their goal of democracy by 2025 without addressing such issues head-on?

Or put that the other way around: what will be left of democracy in 2025 if big data continues to run rampant? Will it be as distant as the gods of Greek mythology?

Still not convinced? Read about Amazon secretly removing George Orwell's 1984 and Animal Farm from Kindles while people were reading them, Apple filtering the availability of apps with a pro-Life bias and Facebook using algorithms to identify homosexual users.

Chris CooperRelEng & RelOps Weekly Highlights - February 5, 2016

This week, we have two new people starting in Release Engineering: Aki Sasaki (:aki) and Rok Garbas (:garbas). Please stop by #releng and say hi!

Modernize infrastructure:

This week, Jake and Mark added check_ami.py support to runner for our Windows 2008 instances running in Amazon. This is an important step towards parity with our Linux instances in that it allows our Windows instances to check when a newer AMI is available and terminate themselves to be re-created with the new image. Until now, we’ve needed to manually refresh the whole pool to pick up changes, so this is a great step forward.

Also on the Windows virtualization front, Rob and Mark turned on puppetization of Windows 2008 golden AMIs this week. This particular change has taken a long time to make it to production, but it’s hard to overstate the importance of this development. Windows is definitely *not* designed to manage its configuration via puppet, but being able to use that same configuration system across both our POSIX and Windows systems will hopefully decrease the time required to update our reference platforms by substantially reducing the cognitive overhead required for configuration changes. Anyone who remembers our days using OPSI will hopefully agree.

Improve CI pipeline:

Ben landed a Balrog patch that implements JSONSchemas for Balrog Release objects. This will help ensure that data entering the system is more consistent and accurate, and allows humans and other systems that talk to Balrog to be more confident about the data they’ve constructed before they submit it.

Ben also enabled caching for the Balrog admin application. This dramatically reduces the database and network load it uses, which makes it faster, more efficient, and less prone to update races.

Release:

We’re currently on beta 3 for Firefox 45. After all the earlier work to unhork gtk3 (see last week’s update), it’s good to see the process humming along.

A small number of stability issues have precipitated a dot release for Firefox 44. A Firefox 44.0.1 release is currently in progress.

Operational:

Kim implemented changes to consume SETA information for Android API 15+ using data from API 11+ until we have sufficient data for API 15+ test jobs. This reduced the high number of pending jobs for the AWS instance types used by Android. (https://bugzil.la/1243877)

Coop (hey, that’s me!) did a long-overdue pass of platform support triage. Lots of bugs got closed out (30+), a handful actually got fixed, and a collection of Windows test failures got linked together under a root cause (thanks, philor!). Now all we need to do is find time to tackle the root cause!

See you next week!

Mozilla WebDev CommunityExtravaganza – February 2016

Once a month, web developers from across Mozilla get together to talk about the work that we’ve shipped, share the libraries we’re working on, meet new folks, and talk about whatever else is on our minds. It’s the Webdev Extravaganza! The meeting is open to the public; you should stop by!

You can check out the wiki page that we use to organize the meeting, or view a recording of the meeting in Air Mozilla. Or just read on for a summary!

Shipping Celebration

The shipping celebration is for anything we finished and deployed in the past month, whether it be a brand new site, an upgrade to an existing one, or even a release of a library.

Git Submodules are Gone from MDN

First up was jezdez with news about MDN moving away from using git submodules to pull in dependencies. Instead, MDN now uses pip to pull in dependencies during deployment. Hooray!

Careers now on AWS/Deis

Next was giorgos who let us know that careers.mozilla.org has moved over to the Engagement Engineering Deis cluster on AWS. For deployment, the site has Travis CI build a Docker image and run tests against it. If the tests pass, the image is deployed directly to Deis. Neat!

Privacy Day

jpetto helped ship the Privacy Day page. It includes a mailing list signup form as well as instructions for several platforms on how to update your software to stay secure.

Automated Functional Testing for Mozilla.org

agibson shared news about the migration of previously-external functional tests for mozilla.org to live within the Bedrock repository itself. This allows us to run the tests, which previously were run by the WebQA team against live environments, whenever the site is deployed to dev, stage, or production. Having the functional tests be a part of the build pipeline ensures that developers are aware when the tests are broken and can fix them before deploying broken features. A slide deck is available with more details.

Peep 3.x

ErikRose shared news about the 3.0 (and 3.1) release of Peep, which helps smooth the transition from Peep to Pip 8, which now supports hashed requirements natively. The new Peep includes a peep port command for porting Peep-compatible requirements files to the new Pip 8 format.

Open-source Citizenship

Here we talk about libraries we’re maintaining and what, if anything, we need help with for them.

Jazzband

jezdez shared news about JazzBand, a cooperative experiment to reduce the stress of maintaining Open Source software alone. The group operates as a Github organization that anyone can join and transfer projects to. Anyone in the JazzBand can access JazzBand projects, allowing projects that would otherwise die due to lack of activity to thrive thanks to the community of co-maintainers.

Notable projects already under the JazzBand include django-pipeline and django-configurations. The group is currently focused on Python projects and is still figuring out things like how to secure releases on PyPI.

django-configurations 1.0

Speaking of the JazzBand, members of the collective pushed out the 1.0 release of django-configurations, which is an opinionated library for writing class-based settings files for Django. The new release adds Django 1.8+ support as well as several new features.

Roundtable

The Roundtable is the home for discussions that don’t fit anywhere else.

Travis CI Sudo for Specific Environments

Next was ErikRose with an undocumented tip for Travis CI builds. As seen on the LetsEncrypt travis.yml, you can specify sudo: required for a specific entry in the build matrix so that only that build runs on Travis' sudo-enabled (non-container) infrastructure, while the rest of the matrix stays on the container-based infrastructure.
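
A minimal sketch of how that might look in a .travis.yml (the env values are invented for illustration; verify against the LetsEncrypt example mentioned above):

  matrix:
    include:
      - sudo: required
        env: RUN_INTEGRATION=1
      - sudo: false
        env: RUN_INTEGRATION=0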

Docker on OS X via xhyve

Erik also shared xhyve, which is a lightweight OS X hypervisor. It’s a port of bhyve, and can be used as the backend for running Docker containers on OS X instead of VirtualBox. Recent changes that have made this more feasible include the removal of a 3 gigabyte RAM limit and experimental NFS support that, according to Erik, is faster than VirtualBox’s shared folder functionality. Check it out!


If you’re interested in web development at Mozilla, or want to attend next month’s Extravaganza, subscribe to the dev-webdev@lists.mozilla.org mailing list to be notified of the next meeting, and maybe send a message introducing yourself. We’d love to meet you!

See you next month!

Mozilla Security BlogMozilla Winter of Security-2015 MozDef: Virtual Reality Interface

Mozilla runs Winter of Security (MWoS) every year to give folks an opportunity to contribute to ongoing security projects in flight. This year an ambitious group took on the task of creating a new visual interface in our SIEM overlay for Elasticsearch that we call MozDef: The Mozilla Defense Platform.

Security personnel are in high demand and analyst skill sets are difficult to maintain. Rather than only focusing on making people better at security, I’m a firm believer that we need to make security better at people. Interfaces that are easier to comprehend and use seem to be a worthwhile investment in that effort and I’m thrilled with the work this team has done.

They’ve wrapped up their project with a great demo of their work. If you are interested in security automation tools and alternative user interfaces, take a couple minutes and check out their work over at Air Mozilla.

 

Air MozillaMozilla Winter of Security-2015 MozDef: Virtual Reality Interface

Mozilla Winter of Security-2015 MozDef: Virtual Reality Interface MWOS Students give an awesome demo of their work adding a unique interface to MozDef: The Mozilla Defense Platform.

Chris CooperWelcome (back), Aki!

This actually is Aki.

In addition to Rok, who also joined our team this week, I’m ecstatic to welcome back Aki Sasaki to Mozilla release engineering.

If you’ve been a Mozillian for a while, Aki’s name should be familiar. In his former tenure in releng, he helped bootstrap the build & release process for both Fennec *and* FirefoxOS, and was also the creator of mozharness, the python-based script harness that has allowed us to push so much of our configuration back into the development tree. Essentially he was devops before it was cool.

Aki’s first task in this return engagement will be to figure out a generic way to interact with Balrog, the Mozilla update server, from TaskCluster. You can follow along in bug 1244181.

Welcome back, Aki!

Chris CooperWelcome, Rok!

This is *not* our Rok.

I’m happy to announce a new addition to Mozilla release engineering. This week, we are lucky to welcome Rok Garbas to the team.

Rok is a huge proponent of Nix and NixOS. Whether we end up using those particular tools or not, we plan to leverage his experience with reproducible development/production environments to improve our service deployment story in releng. To that end, he’s already working with Dustin who has also been thinking about this for a while.

Rok’s first task is to figure out how the buildbot-era version of clobberer, a tool for clearing and resetting caches on build workers, can be rearchitected to work with TaskCluster. You can follow along in bug 1174263 if you’re interested.

Welcome, Rok!

Carsten BookSheriff Newsletter for January 2016

Hi,
To give a little insight into our work and make our work more visible to our Community we decided to create a monthly report of what’s going on in the Sheriffs Team.
If you have questions or feedback, just let us know!
In case you don’t know who the sheriffs are, or to check if there are current issues on the tree, see:
Topics of this month!
1. How-To article of the month
2. Get involved
3. Statistics for January
4. Orange Factor
5. Contact
1. How-To article of the month and notable things!
-> In the Sheriff Newsletter we mentioned the “Orange Factor” but what is this? It is simply the ratio of oranges (test failures) to test runs. The ideal value is, of course, zero.

Practically, this is virtually impossible for a code base of any substantial size, so it is a matter of policy as to what is an acceptable orange factor.

It is worth noting that the overall orange factor indicates nothing about the severity of the oranges. [4]

The main site where you can check out the “Orange Factor” is at https://brasstacks.mozilla.com/orangefactor/  and some interesting info is here https://wiki.mozilla.org/Auto-tools/Projects/OrangeFactor
-> As you might be aware Firefox OS has moved into Tier 3 Support [5] – this means that there is no Sheriff Support anymore for the b2g-inbound tree.

Also, with the move into tier 3, b2g tests have moved to tier 3 and these tests are by default “hidden” on treeherder. To view test results, for example on treeherder for mozilla-central, you need to click the “show/hide excluded jobs” checkbox in the tree view.

2. Get involved!
Are you interested in helping out by becoming a Community Sheriff? Let us know!
3. Statistics
Intermittent Bugs filed in January [1]: 667
and of those, 107 are closed [2]
For Tree Closing times and reasons see:
4. Orange Factor
Current Orangefactor [3]: 12.92
5.  How to contact us
There are a lot of ways to contact us. The fastest one is to contact
the sheriff on duty (the one with the |sheriffduty tag on their nick
:) or by emailing sheriffs @ mozilla dot org.

Karl Dubost[worklog] Outreach is hard, Webkit aliasing big progress

Tunes of the week: Earth, Wind and Fire. Maurice White, the founder, died at 74.

WebCompat Bugs

WebKit aliasing

  • When looking for usage of -webkit-mask-*, I remembered that Google Image was a big offender. So I tested again this morning and… delight! They now use SVG. So now, I need to test Google search extensively and check if they can just send us the version they send to Chrome.
  • Testing Google Calendar again on Gecko with a Chrome user agent to see how far we are from receiving a better user experience. We can't really yet ask Google to send us the same thing they send to Chrome. A couple of glitches here and there. But we are very close. The best outcome would be for Google to fix their CSS, specifically to make flexbox and gradients standard-compatible.
  • The code for the max-width issue (not a bug but implementation differences due to an undefined scenario in the CSS specification) is being worked on by David Baron and reviewed by Daniel Holbert. And this makes me happy; it should solve a lot of the Webcompat bug reports. Look at the list of SeeAlso in that bug.

Webcompat Life and Working with Developer Tools

  • Changing preferences all the time through "about:config" takes multiple steps. I liked how in Opera Presto you could link to a specific preference, so I filed a bug for Firefox. RESOLVED. It already exists: about:config?filter=webkit, and it's bookmarkable.
  • Bug 1245365 - searching attribute values through CSS selectors override the search terms
  • A discussion has been started on improving the Responsive Design Mode of Firefox Developer Tools. I suggested a (too big) list of features that would make my life easier.

Firefox OS Bugs to Firefox Android Bugs

  • Web Compatibility Mozilla employees have reduced their support for solving Firefox OS bugs to a minimum. The community is welcome to continue working on them, but some of these bugs still have an impact on Firefox Android. One good example of this is Bug 959137. Let's come up with a process to deal with those.
  • Another todo from last week. I have been closing a lot of old bugs (around 600 in a couple of days) for Firefox OS and Firefox Android in the Tech Evangelism product. The reasons for closing them are mostly:
    • the site doesn't exist anymore. (This goes into my list of Web Compatibility axioms: "Wait long enough, every bug disappears.")
    • the site fixed the initial issue
    • setting layout.css.prefixes.webkit to true fixes it (see Bug 1213126)
    • the site has moved to a responsive design

Bug 812899 - absolutely positioned element should be vertically centered even if the height is bigger than that of the containing block

This bug was simple at the beginning, but when providing the fix, it broke other tests. That's normal. Boris explained which parts of the code were impacted. But I don't feel I'm good enough yet to touch this, or it would require patience and step-by-step guidance. It could be interesting, though. I have the feeling I have too much on my plate right now. So, a bug to take over!

Testing Google Search On Gecko With Different UA Strings

So last week I gave myself a todo: "testing Google search properties and see if we can find a version which works better on Firefox Android than the current default version sent by Google. Maybe testing with a Chrome UA and an iPhone UA." My preliminary tests look pretty good.

Reading List

Follow Your Nose

Otsukare!

Karl DubostSteps Before Considering a Bug "Ready for Outreach"

Sometimes another team at Mozilla will ask the Webcompat team for help contacting site owners to fix an issue on their Web site which hinders the user experience in Firefox. Let's go through some tips to maximize the chances of getting results when we do outreach.

Bug detection

Bug

A bug has been reported by a user or a colleague. They probably had the issue at the moment they tested. The source of the issue is still quite unknown: a network glitch, a specific add-on configuration, a particular version of Firefox, a broken build of Nightly. Assess whether the bug is reproducible in the most neutral environment possible. And if it's not already done, write "Steps to reproduce" in the comments and describe all the steps required to reproduce the bug.

Analyzing the issue

Bug

You have been able to reproduce it. It is time to understand it. Explain it in very clear terms. Think about the person on the other end who will need to fix the bug. This person might not be a native English speaker. They might not be as knowledgeable as you about Web technologies. Provide links to and samples of the code at issue. This will help them find the appropriate place in the code.

Providing a fix for the issue

Bug

When explaining the issue, you might also have found out how to fix it, or at least one way to fix it. It might not be the way the contacted person will fix it; we do not know their tools, but it will help them create an equivalent fix that fits their process. If your proposal is a better practice, explain why it is beneficial for performance, longevity, resilience, etc.

Partly a Firefox bug

Bug

The site is not working, but it's not entirely their fault. Firefox changed behavior. The browser became more compliant. The feature had a bug which is in the process of being fixed. Think twice before asking for outreach. Sometimes it's just better to push a bit more on fixing the bug in Firefox; that has a better chance of being useful for all the unknown sites using the feature. If the site is big and popular, you might want to ask for outreach, but you need a very good incentive, such as improved performance.

Provide a contact hint

Bug

If by chance you already have contacts at the company, share them, and even try to contact that person directly. If you have information about the company that even bookies don't know, be sure to pass it on to maximize the chances of successful outreach. The hardest part is often finding the right person who can help you fix the issue.

Outreach might fail

And here is the dirty secret: the outreach might not work, or might not be effective right away. Be patient. Persevere.

Bug

Fixing a Web site costs a lot more than you can imagine. Time and frustration are part of the equation. Outreach is not a magic bullet. Sometimes it takes months or even years to fix an issue. Some reasons why outreach might fail:

  • Impossible to find the right contacts. Sometimes you can send bug reports through the company's official communication channels and have your bug ignored or misunderstood. For one site, I reported through the bug reporting system for months until I finally decided to try a back door and emailed a specific developer whose contact information I happened to find online. The bug was fixed in a couple of days.
  • Developers have bosses. They first need to comply with what their bosses tell them to do. They might not be in a very good position in the company, have conflicts with management, etc. Or they just don't have the freedom to make the decision that would fix the issue, even a super simple one.
  • Another type of boss is the client. The client was sold a Web site with a certain budget. Maintenance is always a contentious issue. Web agencies do not work for free, even if the bug was there in the first place. The client might not have asked them to test in that specific browser. Channeling a bug up to the client means the Web agency has to bill the client, and the client might not want to pay.
  • Sometimes you will think you got an easy win: the bug was solved right away. What you do not know is that the developer in charge just put a hack in their code with a beautiful TOFIX comment that will be crushed at the next change of tools or update.
  • You just need to upgrade to version X of your JS library: updating the library will break everything else, or will require testing the zillion other features that use this lib in the rest of the site. In a cost/benefit scenario, you have to demonstrate to the developer that the fix is worth their time and the testing.
  • Wrong department. Sometimes you get the press service, sometimes the communications department, sometimes the IT department in charge of the office backend or commercial operations systems, but not the Web site.
  • The Twitter person is not techy. This happens very often. With the blossoming of social media managers (do we still say that?), the people on the front line are usually helpless when it gets really technical. Your only chance is to convince them to communicate with the tech team. But the tech team often despises them, because too often they bring bugs which are just useless. If the site is an airline, a bank, or a very consumer-oriented service, just forget trying to contact them through Twitter.
  • The Twitter person is a bot. Check the replies on the account; if there is no meaningful interaction with the public, just find another way.
  • You made contact. Nothing happened. People on the other side forget. I'm pretty sure you are also late replying to that email or patching that annoying bug. People ;)
  • The site is just not maintained anymore. No budget. No team. Nobody to move the issue forward.
  • You might have pissed off someone when contacting. You will never know why. Maybe it was not the right day, maybe it was something in your signature, maybe it was the way you addressed them. Be polite, have empathy.

In the end my message is look for the bare necessities of life.

Bug images from American entomology: or description of the insects of North America, illustrated by coloured figures from original drawings executed from nature. Thanks to the New-York Public Library.

Otsukare!

Mike HommeyGoing beyond NS_ProcessNextEvent

If you’ve been debugging Gecko, you’ve probably hit the frustration of having the code you’re inspecting being called asynchronously, and your stack trace rooting through NS_ProcessNextEvent, which means you don’t know at first glance how your code ended up being called in the first place.

Events running from the Gecko event loop are all nsRunnable instances. So at some level close to NS_ProcessNextEvent, in your backtrace, you will see Class::Run. If you’re lucky, you can find where the nsRunnable was created. But that requires the stars to be perfectly aligned. In many cases, they’re not.

There comes your savior: rr. If you don’t know it, check it out. The downside is that you must first rr record a Firefox session doing what you’re debugging. Then, rr replay will give you a debugger with the capabilities of a time machine.

Note, I’m kind of jinxed, I don’t do much C++ debugging these days, so every time I use rr replay, I end up hitting a new error. Tip #1: try again with rr’s current master. Tip #2: roc is very helpful. But my takeaway is that it’s well worth the trouble. It is a game changer for debugging.

Anyways, once you’re in rr replay and have hit your crasher or whatever execution path you’re interested in, and you want to go beyond that NS_ProcessNextEvent, here is what you can do:

(rr) break nsEventQueue.cpp:60
(rr) reverse-continue

(Adjust the line number to match wherever the *aResult = mHead->mEvents[mOffsetHead++]; line is in your tree).

(rr) disable
(rr) watch -l mHead->mEvents[mOffsetHead]
(rr) reverse-continue
(rr) disable

And there you are: you just found where the exact event that triggered the code you were looking at was put on the event queue (assuming there isn’t a nested event loop processed during the first reverse-continue).

Rinse and repeat.

Mozilla Addons BlogFebruary 2016 Featured Add-ons

Pick of the Month: Proxy Switcher

by rNeomy
Access all of Firefox’s proxy settings right from the toolbar panel.

“Exactly what I need to switch on the fly from Uni/Work to home.”

Featured: cyscon Security Shield

by patugo GmbH
Cybercrime protection against botnets, malvertising, data breaches, phishing, and malware.

“The plugin hasn’t slowed down my system in any way. Was especially impressed with the Breach notification feature—pretty sure that doesn’t exist anywhere else.”

Featured: Decentraleyes

by Thomas Rientjes
Evade ad tracking without breaking the websites you visit. Decentraleyes works great with other content blockers.

“I’m using it in combination with uBlock Origin as a perfect complement.”

Featured: VimFx

by akhodakivkiy, lydell
Reduce mouse usage with these Vim-style keyboard shortcuts for browsing and navigation.

“It’s simple and the keybindings are working very well. Nice work!!”

Featured: Saved Password Editor

by Daniel Dawson
Adds the ability to create and edit entries in the password manager.

“Makes it very easy to log in to any site, saves the time of manually typing everything in.”

Nominate your favorite add-ons

Featured add-ons are selected by a community board made up of add-on developers, users, and fans. Board members change every six months, so there’s always an opportunity to participate. Stay tuned to this blog for the next call for applications.

If you’d like to nominate an add-on for featuring, please send it to amo-featured@mozilla.org for the board’s consideration. We welcome you to submit your own add-on!

Support.Mozilla.OrgWhat’s up with SUMO – 4th February

Hello, SUMO Nation!

Last week went by like lightning, mainly due to FOSDEM 2016, but also due to the year speeding up – we’re already in February! What are the traditional festivals in your region this month? Let us know in the comments!

Welcome, new contributors!

If you just joined us, don’t hesitate – come over and say “hi” in the forums!

Contributors of the week

  • Philipp – for his continuous help with Firefox Desktop and many other aspects of Mozilla and SUMO – Vielen Dank!

We salute you!

Don’t forget that if you are new to SUMO and someone helped you get started in a nice way you can nominate them for the Buddy of the Month!

Most recent SUMO Community meeting

The next SUMO Community meeting…

  • is happening on Monday the 8th of February – join us!
  • Reminder: if you want to add a discussion topic to the upcoming meeting agenda:
    • Start a thread in the Community Forums, so that everyone in the community can see what will be discussed and voice their opinion here before Monday (this will make it easier to have an efficient meeting).
    • Please do so as soon as you can before the meeting, so that people have time to read, think, and reply (and also add it to the agenda).
    • If you can, please attend the meeting in person (or via IRC), so we can follow up on your discussion topic during the meeting with your feedback.

Developers

Community

Social

Support Forum

Knowledge Base

Localization

  • Please check the for iOS section below for an important announcement!

Firefox

And that’s it – short and sweet for your reading pleasure. We hope you have a great weekend and we are looking forward to seeing you on Monday! Take it easy and keep rocking the helpful web. Over & out!

Air MozillaWeb QA Weekly Meeting, 04 Feb 2016

Web QA Weekly Meeting This is our weekly gathering of Mozilla’s Web QA team filled with discussion on our current and future projects, ideas, demos, and fun facts.

Air MozillaReps weekly, 04 Feb 2016

Reps weekly This is a weekly call with some of the Reps to discuss all matters about/affecting Reps and invite Reps to share their work with everyone.

Gervase MarkhamMOSS Applications Still Open

I am currently running the MOSS (Mozilla Open Source Support) program, which is Mozilla’s program for assisting other projects in the open source ecosystem. We announced the first 7 awardees in December, giving away a total of US$533,000.

The application assessment process has been on hiatus while we focussed on getting the original 7 awardees paid, and while the committee were on holiday for Christmas and New Year. However, it has now restarted. So if you know of a software project that could do with some money and that Mozilla uses or relies on (note: that list is not exhaustive), now is the time to encourage them to apply. :-)

Gijs KruitboschWhy was Tab Groups (Panorama) removed?

Firefox 44 has been released, and it has started warning users of Tab Groups about its removal in Firefox 45. There were a number of reasons that led to the removal of Tab Groups. This post will aim to talk about each in a little bit more detail.

The removal happened in the context of “Great or Dead”, where we examine parts of Firefox, look at their cost/benefit balance, and sometimes decide to put resources into improving them, and sometimes decide to recognize that they don’t warrant that and remove that part of the browser.

For Tab Groups, here are some of the things we considered:

  • It had a lot of bugs: a number of serious issues relating to performance, a lot of intermittently failing tests, buggy group and window closing, as well as a huge pile of smaller issues. You couldn’t move tabs represented as large squares to groups represented as small squares; sometimes you could get stuck in it, or groups would randomly move; and the list goes on – the quality simply wasn’t what it should be, considering we ship it to millions of users, which was part of the reason why it was hidden away as much as it was.
  • The Firefox team does not believe that the current UI is the best way to manage large numbers of tabs. Some of the user experience and design folks on our team have ideas in this area, and we may revisit “managing large numbers of tabs” at some point in the future. We do know that we wouldn’t choose to re-implement the same UI again. It wouldn’t make sense to heavily invest in a feature that we should be replacing with something else.
  • It was interfering with other important projects, like electrolysis (multi-process Firefox). When using separate processes for separate tabs, we need to make certain behaviours that used to be synchronous deal with being asynchronous. The way that Tab Groups’ UI was interwoven with the tabbed browser code, and the way the UI effectively hid all the tabs and showed thumbnails for all of them instead, made this harder for things like tab switching and tab closing.
  • It had a number of serious code architecture problems. Some of the animation and library choices caused intermittent issues for users as linked to earlier. All of the groups were stored with absolute pixel positions, creating issues if you change your window size, use a different screen or resolution, etc. When we added a warning banner to the bottom of the UI telling users we were going to remove it, that interfered with displaying search results. The code is very fragile.
  • It was a large feature. By removing tab groups we removed more than 24,000 lines of code from Firefox.

With all these issues in mind, we had to decide if it was better to invest in making it a great feature in Firefox, or remove the code and focus on other improvements to Firefox. When we investigated usage data, we found that only an extremely small fraction of Firefox users were making use of Tab Groups. Around 0.01%. Such low usage couldn’t justify the massive amount of work it would take to improve Tab Groups to an acceptable quality level, and so we chose to remove it.

If you use Tab Groups, don’t worry: we will preserve your data for you, and there are add-ons available that can make the transition completely painless.

Mike TaylorA quiz about ES2015 block-scoped function declarations (in a with block statement)

Quiz time, nerds.

Given the following bit of JS, what's the value of window.f when lol gets called outside of the with statement?

with (NaN) {
  window.f = 1;
  function lol(){window.f = 2};
  function lol(){window.f = 3};
}
lol()

. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

Trick question, your program crashed before it got to call lol()!

According to ES2015, it should be a SyntaxError, because you're redefining a function declaration in the same scope. Just like if you were re-declaring a let thingy more than once.
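
For comparison, here's a minimal, hypothetical snippet showing the let analogy the post refers to (the exact error message varies by engine):

// Redeclaring a let binding in the same scope is an early SyntaxError,
// so none of this script runs at all.
let coolDudeFunction = 1;
let coolDudeFunction = 2; // SyntaxError: redeclaration of let 'coolDudeFunction'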

However, the real trick is that Chrome and Firefox will just ignore the first declaration so your program doesn't explode (for now, anyways). So the answer is really just 3 (which you probably guessed).

Double-tricked!

Not surprisingly there are sites out there that depend on this funky declared-my-function-twice-in-the-same-scope pattern. webex.com was one (bug here), but they were super cool and fixed their code already. The Akamai Media Player on foxnews.com (bug here) is another (classic foxnews.com move).

It would be really cool if browsers didn't have to do this, so if you know anybody who works on the Akamai Advanced Media Player, tell them to delete the second declaration of RecommendationsPlugin()? And if you see an error in Firefox that says redeclaration of block-scoped function 'coolDudeFunction' is deprecated, go fix that too — it might stop working one day.

Now don't forget to like and subscribe to my Youtube ES2016 Web Compat Pranks channel.

(The with(NaN){} bit isn't important, but it was lunch time when I wrote this.)

Darrin HeneinPrototyping Firefox Mobile (or, Designers Aren’t Magicians)

I like to prototype things. To me—and surely not me alone—design is how something looks, feels, and works. These are very hard to gauge just by looking at a screen. Impossible, some would argue. Designs (and by extension, designers) solve problems. They address real needs. During the creative process, I need to see them in action, to hold them in my hand, on the street and on the train. In real life. So, as often as not, I need to build something.

Recently we’ve been exploring a few interesting ideas for Firefox Mobile. One advantage we have, as a mobile browser, is the idea of context—we can know where you are, what time it is, what your network connection is like—and more so than on other platforms utilize that context to provide a better experience. Not many people shop at the mall or wait in line at the bank with their laptop open, but in many of Firefox’s primary markets, people will have their phone with them. Some of this context could help us surface better content or shortcuts in different situations… we think.

The first step was to decide on scope. I sketched a few of these ideas out and decided which I would test as a proof of concept: location aware shortcut links, a grouped history view, and some attempt at time-of-day recommendations. I wanted to test these ideas with real data (which, in my opinion, is the only legitimate way to test features of this nature), so I needed to find a way to make my history and other browser data available to my prototype. This data is available in our native apps, so whatever form my prototype took, it would need to have access to this data in some way. In many apps or products, the content is the primary focus of the experience, so making sure you shift from static/dummy content to real/dynamic content as quickly as possible is important. Edge cases like ultra-long titles or poor-quality images are real problems your design should address, and these will surface far sooner if you’re able to see your design with real content.

Next I decided (quickly) on some technology. The only criteria here was to use things that would get me to a testable product as quickly as possible. That meant using languages I know, frameworks to take shortcuts, and to ask for help when I was beyond my expertise. Don’t waste time writing highly abstracted, super-modular code or designing an entire library of icons for your prototypes… take shortcuts, use open-source artwork or frameworks, and just write code that works.

I am most comfortable with web technologies—I do work at Mozilla, after all—so I figured I’d make something with HTML and CSS, and likely some Javascript. However, our mobile clients (Firefox for Android and Firefox for iOS) are written in native code. I carry an iPhone most days, so I looked at our iOS app, which is written in Swift. I figured I could swap out one of the views with a web view to display my own page, but I still needed some way to get my browsing data (history, bookmarks, etc.) down into that view. Turns out, the first step in my plan was a bit of a roadblock.

Thankfully, I work with a team of incredible engineers, and my oft-co-conspirator Steph said he could put something together later that week. It took him an afternoon, I think. Onward. Even if I thought I could hack this together myself, I wasn’t sure, and didn’t want to waste time.

🔑 Whenever possible, use tools and frameworks you’ve used before. It sounds obvious, but I could tell you some horror stories of times where I wasted countless hours just trying to get something new to work. Save it for later.

In the meantime, I got my web stack all set up: using an off-the-shelf boilerplate for webpack and React (which I had used before), I got the skeleton of my idea together. Maybe overkill at this point, but having this in place would let me quickly swap new components in and out to test other ideas down the road, so I figured the investment was worth it. Because the location idea was not dependent on the user’s existing browser data, I could get started on that while Steph built the WebPanel for me.

Working for now in Firefox on the desktop, I used the Geolocation API to get the current coordinates of the user. Appending that to a Foursquare API url and performing a GET request, I now had a list of nearby locations. Using Lodash.js I filtered them to only include records with attached URLs, then sorted by proximity.


var query = "FoursquareAPI+MyClientID"; // Foursquare venues search endpoint + client ID

navigator.geolocation.getCurrentPosition(function(position) {
  var ll = position.coords.latitude + "," + position.coords.longitude;
  $.get(query + ll, function(data) {
    // Keep only venues that have a URL attached
    data = _.filter(data.response.venues, function(venue) {
      return venue.url != null;
    });
    comp.setState({
      // Sort the remaining venues by proximity
      foursquareData: _.sortBy(data, function(venue) {
        return venue.location.distance;
      })
    });
  });
});

 

Step 1 of my prototype, done. Well, it worked in the desktop browser at least. I knew our mobile web view supported the same Geo API, so I was confident this would work there as well (and, it did).

At this point, Steph had built some stuff I could work with. By building a special branch of Firefox iOS, I now had a field in the settings app which let me define a URL which would load in one of my home panels instead of the default native views. One of the benefits of this approach is that I could update the web-app remotely and not have to rebuild/redeploy the native app with each change. And by using a tool like ngrok I could actually have that panel powered by a dev server running on my machine’s localhost.

(Screenshot: the prototype panel running in the iOS Simulator.)

Steph’s WebPanel.swift provided me with a simple API to query the native profile for data, seen here:


window.addEventListener("load", function () {
  webkit.messageHandlers.mozAPI.postMessage({
    method: "getSitesByLastVisit",
    params: {
      limit: 10000
    },
    callback: "receivedHistory"
  });
});

Here, I’m firing off a message to our mozAPI once the page has loaded, and passing it some parameters: the method I’d like to run and the limit on the number of records returned. Lastly, the name of a callback for the iOS app to pass the result of the query to.


window.receivedHistory = function(err, data) {
  store.dispatch(updateHistory(data));
}

This is the callback in my app, which just updates the flux store with the data passed from the native code.

At this point, I had a flux-powered app that could display native browser data through react views. This was enough to get going with, and let me start to build some of the UI.

Steph had stubbed out the API for me and was passing down a JSONified collection of history visits, including the URL and title for each visit. To build the UI I had in mind, however, I needed the timestamps and icons, too. Thankfully, I contributed a few hundred lines of Swift to Firefox 1.0, and could hack these in:


extension Site: DictionaryView {
    func toDictionary() -> [String: AnyObject] {
        let iconURL = icon?.url != nil ? icon?.url : ""
        return [
            "title": title,
            "url": url,
            "date": NSNumber(unsignedLongLong: (latestVisit?.date)!),
            "iconURL": iconURL!,
        ]
    }
}

Which gave me the following JSON passed to the web view:


[
  {
    title: "We're building a better internet — Mozilla",
    url: "http://mozilla.org",
    date: "1454514630131",
    iconUrl: "/media/img/favicon.52506929be4c.ico"
  },
  …
]

Firstly, try not to judge my Swift skills. The purpose here was to get it working as quickly as possible, not to ship this code. Hacks are allowed, and encouraged, when prototyping. I added a date and iconURL field to the history record object and before long, I was off to the races.

With timestamps and icons in hand, I could build the rest of the UI. A simple history view that batched visits by domain (so 14 Gmail links would collapse to 3 and “11 more…”), and a quick attempt at a time-based recommendation engine.
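
As an illustration only (not the actual prototype code), the domain batching could be sketched with Lodash roughly like this, assuming each visit object carries a url:

// Hypothetical sketch: group visits by hostname, show the first few,
// and collapse the rest into an "N more…" row.
function batchByDomain(visits, maxVisible) {
  var groups = _.groupBy(visits, function (visit) {
    return visit.url.split("/")[2]; // crude hostname extraction
  });
  return _.map(groups, function (groupVisits, domain) {
    return {
      domain: domain,
      visible: _.take(groupVisits, maxVisible),
      hiddenCount: Math.max(groupVisits.length - maxVisible, 0) // e.g. "11 more…"
    };
  });
}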

The recommendation algorithm may be ugly, but it naively does one thing: depending on the time of day and the day of the week, it returns some guesses about which sites I might be interested in (based on past browsing behaviour). It works by following these steps (a rough sketch in code follows the list):

  1. Filter my entire history to only include visits from the same day type (weekday vs. weekend)
  2. Exclude domains that are useless, like t.co or bit.ly
  3. Further filter the set of visits to only include visits +/- some buffer around the current time: the initial prototype used a buffer of +/- one hour
  4. Group the visits by their TLD + one level of path (i.e. google.com/document), which gave me better groups to work with
  5. Sort these groups by length, to provide an array with the most popular domain at the beginning (and limit this output to the top n domains, 10 in my case)
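
Here is a rough, hypothetical sketch of those five steps using Lodash, assuming history is an array of visit objects with url and date (milliseconds) fields; the names and shapes are illustrative, not the actual prototype code:

// Hypothetical sketch of the time-of-day recommendations.
function recommend(history, now, bufferMinutes, limit) {
  var ignored = ["t.co", "bit.ly"];
  var isWeekend = now.getDay() === 0 || now.getDay() === 6;
  var nowMinutes = now.getHours() * 60 + now.getMinutes();

  // Steps 1-3: same day type, drop useless domains, +/- time-of-day buffer
  var candidates = _.filter(history, function (visit) {
    var d = new Date(Number(visit.date));
    var visitIsWeekend = d.getDay() === 0 || d.getDay() === 6;
    var visitMinutes = d.getHours() * 60 + d.getMinutes();
    var host = visit.url.split("/")[2] || "";
    return visitIsWeekend === isWeekend &&
           ignored.indexOf(host) === -1 &&
           Math.abs(visitMinutes - nowMinutes) <= bufferMinutes;
  });

  // Step 4: group by host plus one level of path, e.g. "google.com/document"
  var groups = _.groupBy(candidates, function (visit) {
    var parts = visit.url.split("/");
    return parts[2] + (parts[3] ? "/" + parts[3] : "");
  });

  // Step 5: sort groups by size and keep the top `limit`
  return _.chain(groups)
    .map(function (visits, domain) {
      return { domain: domain, count: visits.length, date: visits[0].date };
    })
    .sortBy(function (g) { return -g.count; })
    .take(limit)
    .value();
}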

The output is similar to the following:


[
  {
    domain: "http://flickr.com/photos",
    date: "Wed Feb 03 2016 10:54:02 GMT-0500",
    count: 3
  },
  {
    domain: "www.dpreview.com/forums",
    date: "Wed Feb 03 2016 10:54:02 GMT-0500",
    count: 2
  },
  …
]

Awesome. Now I have a Foursquare-powered component at the top which lists nearby URLs. Below that, a component that shows me the 5 websites I visit most often around this time of day. And next, a component that shows my history in a slightly improved format, with domain-grouping and truncation of long lists of related visits. All with my actual data, ready for me to use this week and evaluate these ideas.

One problem surfaces, though. Any visits that come from another device (through Firefox Sync) have no icon attached to them (right now, we don’t sync favicons across devices), which leaves us with long series of visits with no icon. One of the hypotheses we want to confirm is that the favicon (among other visual cues) helps the user parse and understand the lists of URLs we present them with.

🔑 Occasionally I’ll be faced with a problem like this: one where I know the desired outcome but have not tackled something like this before, and so have low confidence in my ability to fix it quickly. I know I need some way to get icons for a set of URLs, but not exactly how that will work. At this point it’s crucial to remember one of the goals of a prototype: get to a testable artifact as quickly as possible. Often in this situation I’ll time-box myself: if I can get something working in a few hours, great. If not, move on or just fake it (maybe having a preset group of icons I could assign at random would help address the question).

Again, I turn to my trusty toolbox, where I know the tools and how to use them. In this case that was Node and Express, and after a few hours I had an app running on Heroku with a simple API. I could POST an array of URLs to my endpoint /icons and my Node app would spin up a series of parallel tasks (using async.parallel). Each task would load the URL via Node’s request module, and would hand the HTML in the response over to cheerio, a server-side analog for jQuery. Using cheerio I could grab the relevant <meta> and <link> tags, check for a number of known values (‘icon’, ‘apple-touch-icon’, etc.) and grab the URL associated with each. While I was there, I figured I might as well capture a few other tags, such as Facebook’s OpenGraph og:image tag.
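
The sketch below is a minimal reconstruction of that idea, not the original 164-line app; it assumes the express, body-parser, async, request and cheerio packages, and the URL prefixing and selector choices here are mine:

// Hypothetical sketch of the icon-scraping endpoint.
var express = require("express");
var bodyParser = require("body-parser");
var async = require("async");
var request = require("request");
var cheerio = require("cheerio");

var app = express();
app.use(bodyParser.json());

var cache = {}; // naive in-memory cache, keyed by URL

app.post("/icons", function (req, res) {
  var tasks = {};
  req.body.urls.forEach(function (url) {
    tasks[url] = function (done) {
      if (cache[url]) { return done(null, cache[url]); }
      request({ url: "http://" + url, timeout: 5000 }, function (err, response, body) {
        if (err || !body) { return done(null, { icons: [], images: [] }); }
        var $ = cheerio.load(body);
        var result = {
          icons: $('link[rel="shortcut icon"], link[rel="icon"], link[rel="apple-touch-icon"]')
            .map(function () { return { type: $(this).attr("rel"), url: $(this).attr("href") }; })
            .get(),
          images: $('meta[property="og:image"]')
            .map(function () { return { type: "og-image", url: $(this).attr("content") }; })
            .get()
        };
        cache[url] = result;
        done(null, result);
      });
    };
  });
  // Run all fetches in parallel and reply with one object keyed by URL.
  async.parallel(tasks, function (err, results) {
    res.json(results);
  });
});

app.listen(process.env.PORT || 3000);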

Once each of the parallel requests either completed or timed out, I combined all the extracted data into one JSON object and sent it back down the wire. A sample request may look like this:


POST to '/icons'

{ urls: ["facebook.com"] }

And the response would look like this (keyed to the URL so the app which requested it could associate icons/images with the right URL… the above array could contain any number of URLs, and the below response would just have more top-level keys):


{
  "facebook.com": {
    "icons": [
      {
        "type": "shortcut-icon",
        "url": "https://static.xx.fbcdn.net/rsrc.php/yV/r/hzMapiNYYpW.ico"
      }, …
    ],
    "images": [
      {
        "type": "og-image",
        "url": "http://images.apple.com/ca/home/images/og.jpg?201601060653"
      }, …
    ]
  }
}

Again, maybe not the best API design, but it works and only took a few hours. I added a simple in-memory cache so that subsequent requests for icons or images for URLs we’ve already fetched are returned instantly. The entire Express app was 164 lines of Javascript, including all requires, comments, and error handling. It’s also generic enough that I can now use it for other prototypes where metadata such as favicons or lead images are needed for any number of URLs.


So why do all this work? Easy: because we have to. Things that just look pretty have become a commodity, and beyond being nice to look at they don’t serve much purpose. As designers, product managers, engineers—anyone who makes things—we are responsible for delivering real value to our users: features and apps they will actually use, and when they use them, they will work well. They will work as expected, and even go out of their way to provide a moment of delight from time to time. It should be clear that the people who designed this “thing” actually used it. That it went through a number of iterations to get right. That it was no accident or coincidence that what you are holding in your hands ended up the way it is. That the designers didn’t just guess at how to solve the problem, but actually tried a few things to really understand it at a fundamental level.

Designers are problem solvers, not magicians. It is relatively cheap to pivot an idea or tweak an interface in the design phase, versus learning something in development (or worse, post-launch) and having to eat the cost of redesigning and rebuilding the feature. Simple ideas often become high-value features once their utility is seen with real use. Sometimes you get 95% of the way, and see how a minor revision can really push an idea across the finish line. And, realistically, sometimes great ideas on paper utterly flop in the field. Better to crash and burn in the hands of your trusted peers than out in the market, though.

Test your ideas, test them with real content or data, and test them with real people.

Air MozillaThe Joy of Coding - Episode 43

The Joy of Coding - Episode 43 mconley livehacks on real Firefox bugs while thinking aloud.

Laura de Reynal38 hours

From Bangaluru to Ahmedabad, immersed in the train ecosystem for 38 hours.
1608 km, 37:45 hours.
(Photos from the Caravan 2016 India series.)

Filed under: Mozilla, Photography, Research Tagged: Connected Spaces, ethnography, India, journey, mobile office, research, train

Robert O'Callahanrr 4.1.0 Released

This release mainly improves replay performance dramatically, as I documented in November. It took a while to stabilize for release, partly because we ran into a kernel bug that caused rr tests (and sometimes real rr usage) to totally lock up machines. This release contains a workaround for that kernel bug. It also contains support for the gdb find command, and fixes for a number of other bugs.

Mozilla Addons BlogWebExtensions in Firefox 46

We last updated you on our progress with WebExtensions when Firefox 45 landed in Developer Edition (Aurora), and today we have an update for Firefox 46, which landed in Developer Edition last week.

While WebExtensions will remain in an alpha state in Firefox 46, we’ve made lots of progress, with 40 bugs closed since the last update. As of this update, we are still on track for a milestone release in Firefox 48 when it hits Developer Edition. We encourage you to get involved early with WebExtensions, since this is a great time to participate in its evolution.

A focus of this release was quality. All WebExtensions code now passes eslint, and we’ve fixed a number of issues with intermittent test failures and timeouts. We’ve also introduced new APIs in this release, including the following (a small usage sketch follows the list):

  • chrome.notifications.getAll
  • chrome.runtime.sendMessage
  • chrome.webRequest.onBeforeRedirect
  • chrome.tabs.move
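
As a purely illustrative, hypothetical example of calling two of these from a background script (assuming a WebExtension whose manifest.json requests the "notifications" permission):

// Log the IDs of any notifications the extension is currently showing.
chrome.notifications.getAll(function (notifications) {
  console.log("Active notification IDs:", Object.keys(notifications));
});

// Send a message to other extension pages listening via chrome.runtime.onMessage.
chrome.runtime.sendMessage({ action: "ping" }, function (reply) {
  console.log("Got reply:", reply);
});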

Create customizable views

In addition to the new APIs, support was added for second-level popup views in bug 1217129, giving WebExtension add-ons the ability to create customizable views.

Check out this example from the Whimsy add-on:
(Screenshot: a second-level popup view from the Whimsy add-on’s browser action.)

Create an iFrame within a page

The ability to create an iFrame that is connected to the content script was added in bug 1214658. This allows you to create an iFrame within a rendered page, which gives WebExtension add-ons the ability to add additional information to a page, such as an in-page toolbar:

(Screenshot: an in-page toolbar rendered via a content-script iFrame.)

For additional information on how to use these additions to WebExtensions, (and WebExtensions in general), please check out the examples on MDN or GitHub.

Upload and sign on addons.mozilla.org (AMO)

WebExtension add-ons can now be uploaded to and signed on addons.mozilla.org (AMO). This means you can sign WebExtension add-ons for release. Listed WebExtension add-ons can be uploaded to AMO, reviewed, published and distributed to Firefox users just like any other add-on. The use of these add-ons on AMO is still in beta and there are areas we need to improve, so your feedback is appreciated in the forum or as bugs.

Get involved

Over the coming months we will work our way towards a beta in Firefox 47 and the first stable release in Firefox 48. If you’d like to jump in to help, or get your APIs added, please join us on our mailing list or at one of our public meetings, or check out this wiki page.

Air MozillaWebdev Extravaganza: February 2016

Webdev Extravaganza: February 2016 Once a month web developers across the Mozilla community get together (in person and virtually) to share what cool stuff we've been working on.

Henrik SkupinFirefox Desktop automation goals Q1 2016

As promised in my last blog posts, I don’t want to blog only about the goals from past quarters, but also about planned work and what’s currently in progress. So this post will be the first one to shed some light on my active work.

First, let’s get started with my goals for this quarter.

Execute firefox-ui-tests in TaskCluster

Now that our tests are located in mozilla-central, mozilla-aurora, and mozilla-beta, we want to see them run on a per-check-in basis, including on try. Usually you would set up Buildbot jobs to get the desired tasks running. But given that the build system will be moved to TaskCluster in the next couple of months, we decided to start directly with the new CI infrastructure.

So what will this look like, and how will mozmill-ci cope with it? For the latter I can say that we don’t want to run more tests than we do right now. This is mostly due to our limited infrastructure, which I have to maintain myself. Needing to run firefox-ui-tests for each check-in on all platforms, and even for try pushes, would mean totally exceeding our machine capacity. Therefore we will continue to use mozmill-ci for now to test nightly and release builds for en-US as well as a couple of other locales. This might change later this year when mozmill-ci can be replaced by running all the tasks in TaskCluster.

Anyway, for now my job is to get the firefox-ui-tests running in TaskCluster once a build task has finished. Although this can only be done for Linux right now, it shouldn’t matter much, given that nothing in our firefox-puppeteer package is platform dependent so far. Expanding testing to other platforms should be trivial later on. For now the primary goal is to see the results of our tests in Treeherder and to let developers know what needs to be changed if, for example, UI changes cause a regression for us.

If you are interested in more details have a look at bug 1237550.

Documentation of firefox-ui-tests and mozmill-ci

We have been submitting our test results to Treeherder for a while now and they are pretty stable. But the jobs are still listed as Tier-3 and are not taken care of by sheriffs. To reach Tier-2 we definitely need proper documentation for our firefox-ui-tests, and especially for mozmill-ci. In case of test failures or build bustage, the sheriffs have to know what needs to be done.

Now that the dust caused by all the refactoring and the move of the firefox-ui-tests to hg.mozilla.org has settled a bit, we want to start working more with contributors again. To make contributing easy I will create various pieces of project documentation showing how to get started and how to submit patches. Ultimately I want to see a Quarter of Contribution project for our firefox-ui-tests around the middle of this year. Let’s see how this goes…

More details about that can be found on bug 1237552.

Christian HeilmannAll the small things at Awwwards Amsterdam

Last week, I cut my holiday in the Bahamas short to go to the Awwwards conference in Amsterdam and deliver yet another fire and brimstone talk about performance and considering people outside of our sphere of influence.

Photo by Trine Falbe

The slides are on SlideShare:

The screencast of the talk is on YouTube:

I want to thank the organisers for allowing me to vent a bit and I was surprised to get a lot of good feedback from the audience. Whilst the conference, understandably, is very focused on design and being on the bleeding edge, some of the points I made hit home with a lot of people.

Especially the mention of Project Oxford and its possible implementations in CMS got a lot of interest, and I’m planning to write a larger article for Smashing Magazine on this soon.

Air MozillaMartes mozilleros, 02 Feb 2016

Martes mozilleros Reunión bi-semanal para hablar sobre el estado de Mozilla, la comunidad y sus proyectos. Bi-weekly meeting to talk (in Spanish) about Mozilla status, community and...

Just BrowsingEncapsulation in Redux: the Right Way to Write Reusable Components

Redux is one of the most discussed JavaScript front-end technologies. It has defined a few paradigms that vastly improve developer efficiency and give us a solid foundation for building apps in a simple yet robust way. No framework is perfect though, and even Redux has its drawbacks.

Redux has the special power to make you feel like a super-programmer who can effortlessly write flawless code. Enjoyable though this may be, it's actually a huge issue, because it blinds you with its simplicity and may therefore lead to situations where your shiny code turns into an unmaintainable mess. This is especially true for those who are starting a new project without any prior experience with Redux.

Encapsulation Matters

Global application state is arguably the biggest benefit of Redux. On the other hand, it is a bit of a mind-bender because we were always told that global stuff is evil. Not necessarily: there is nothing wrong with keeping the entire application state snapshot in one place, as long as you don't break encapsulation. Unfortunately, with Redux this is all too easy.

Just imagine a simple case where we have a team of two people: Dan and Andre. Both of them were given the very simple task of implementing an independent component. Dan is supposed to create Foo, and Andre is supposed to create Bar. Both components are similar in that they should contain only a button which acts as a toggle for some string. (Obviously this is an artificial example provided for illustrative purposes.)

Since the state of each component is distinct, and there are no dependencies between them, we can encapsulate them simply by using combineReducers. This gives each reducer its own slice of the application state, so it can only access the relevant part: fooReducer will get foo and barReducer will get bar. Andre and Dan can work independently, and they can even open-source the components because they are completely self-contained.
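
A minimal sketch of that wiring might look like the following; note that the post itself never shows fooReducer or barReducer in this initial form, so these are illustrative only:

// Hypothetical initial setup: two fully isolated reducers combined into one store.
import { combineReducers, createStore } from 'redux';

const fooReducer = (fooState = {toggled: false}, { type }) => {
  switch (type) {
    case 'TOGGLE_FOO':
      return {...fooState, toggled: !fooState.toggled};
    default:
      return fooState;
  }
};

const barReducer = (barState = {toggled: false}, { type }) => {
  switch (type) {
    case 'TOGGLE_BAR':
      return {...barState, toggled: !barState.toggled};
    default:
      return barState;
  }
};

// Each reducer only ever sees its own slice: state.foo and state.bar.
const store = createStore(combineReducers({ foo: fooReducer, bar: barReducer }));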

But one day, some evil person (from marketing, no doubt) tells Andre that he must extend the app and should embed a Qux component in his Bar. Qux is a simple counter with a button that just increments a number when clicked. So far so good: Andre just needs to embed the Qux in his Bar and provide an isolated application state slice so that Bar does not know anything about the internal implementation of Qux. This ensures that the principle of encapsulation is not violated. The application state shape could look like this:

{
  foo: {
    toggled: false
  },
  bar: {
    toggled: false,
    qux: {
      counter: 0
    }
  }
}

However, there was one more requirement: Andre should also cooperate with Dan, because the counter must be incremented anytime Foo is toggled. Now it's getting more complicated, and it looks like some sort of interface will be needed.

Now we can see the problem with the way Redux blinds developers to potential architectural missteps. People tend to seize on the fact that Redux provides global application state. The simplest solution would therefore be to get rid of combineReducers and provide the entire application state to fooReducer, which can increment the counter internally in the bar branch of the app state tree. Naturally this is totally wrong and breaks the principle of encapsulation. You never want to do this, because the logic hidden behind incrementing the counter may be much more complicated than it seems. As a result, this solution does not scale. Anytime you change the implementation of qux, you'll need to change the implementation of foo as well, which is a maintainability nightmare.

"But wait!" I hear you saying. The whole point of Redux is that you can handle a given action in multiple reducers, right? This suggests that we should handle the TOGGLE_FOO action in quxReducer and increment the counter there. This is a bad idea for a couple of reasons.

For starters, Qux becomes coupled with Foo because it needs to know about its internal details. More importantly, Reto Schläpfer makes a compelling case that it quickly becomes difficult to reason about code where the results of an action are spread across the codebase. It is much better to compose your reducers so that a higher-level reducer handles each action in a single place and delegates processing to one or more lower-level reducers.

Composition is the New Encapsulation

As you might have spotted, we are suggesting that in addition to composing components and composing state, we should compose reducers as well.

Concretely, this means that Andre needs to define a public interface for his Qux component: an incrementClicks function.

export const incrementClicks = quxState => ({...quxState, clicked: quxState.clicked + 1});  

This implementation is completely encapsulated and is specific to the Qux component. The function is exported because it is part of the public interface. Now we use this function to implement the reducer:

const quxReducer = (quxState = {clicked: 0}, { type }) => {  
  switch (type) {
    case 'QUX_INCREMENT':
      return incrementClicks(quxState); // The public method is called
    default:
      return quxState;
  }
};

Because Bar embeds Qux's application state slice, barReducer is responsible for delegating all actions to quxReducer using straightforward function composition. The barReducer might look something like this:

const barReducer = (barState = {toggled: false}, action) => {  
  const { type } = action;

  switch (type) {
    case 'TOGGLE_BAR':
      return {...barState, toggled: !barState.toggled};
    default:
      return {
        ...barState,
        qux: quxReducer(barState.qux, action) // Reducer composition
      };
  }
};

Now we are ready for some more advanced reducer composition, since we know that Qux should also increment when Foo is toggled. At the same time, we want Foo to be completely unaware of Qux's internals. We can define a top-level reducer that holds state for Foo and Bar and delegates the right portion of the app state to incrementClicks. Only rootReducer, which aggregates fooReducer and barReducer, will be aware of the interdependency between the two.

const rootReducer = (appState = {}, action) => {  
  const { type } = action;

  switch (type) {
    case 'TOGGLE_FOO':
      return {
        ...appState,
        foo: fooReducer(appState.foo, action),
        bar: {...appState.bar, qux: incrementClicks(appState.bar.qux)}
      };

    default:
      return {
        ...appState,
        foo: fooReducer(appState.foo, action),
        bar: barReducer(appState.bar, action)
      };
  }
};

The default case acts exactly like combineReducers. The TOGGLE_FOO handler, on the other hand, glues the reducers together while handling the interdependency. There is still one line which is pretty ugly:

bar: {...appState.bar, qux: incrementClicks(appState.bar.qux)}  

The problem is that this line depends on implementation details of Bar (state shape). We would rather encapsulate these details behind the public interface of the Bar reducer:

export const incrementClicksExposedByBar = barState => ({...barState, qux: incrementClicks(barState.qux)});  

And now it's just a matter of using the public interface:

    ...
    case 'TOGGLE_FOO':
      return {
        ...appState,
        foo: fooReducer(appState.foo, action),
        bar: incrementClicksExposedByBar(appState.bar)
      };

I've prepared a complete working codepen so that you can try this out on your own.

Armen ZambranoEnd of MozCI QoC term

We recently completed another edition of Quarter of Contribution and I had the privilege of working with MikeLing, F3real & xenny.
I want to take a moment to thank all three of you for your hard work and contributions! It was a pleasure to work together with you during this period.

Some of the highlights of this term are:

You can see all other mozci contributions in here.

One of the things I learned from this QoC term:
  • Prepare sets of issues that are related which build towards a goal or a feature.
    • The better you think it through the easier it will be for you and the contributors
    • In GitHub you can create milestones of associated issues
  • Remind them to review their own code.
    • This is something I try to do for my own patches and saves me from my own embarrassment :)
  • Put it on the contributors to test their code before requesting formal review
    • It forces them to test that it does what they expect it to do
  • Set expectations for review turn around.
    • I could not be reviewing code every day since I had my own personal deliverables. I set Monday, Wednesday and Friday as code review days.
It was a good learning experience for me and I hope it was beneficial for them as well.



Creative Commons License
This work by Zambrano Gasparnian, Armen is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.

Byron Joneshappy bmo push day!

the following changes have been pushed to bugzilla.mozilla.org:

  • [1243246] Attachment data submitted via REST API must always be base64 encoded
  • [1213424] The Bugzilla autocomplete dropdown should expand the width to show the full text of a match
  • [1241667] Trying to report a bug traps the user in an infinite loop
  • [1188236] “Congratulations on having your first patch approved” email should be clearer about how to get the patch landed.
  • [1243051] Create one off script to output cpanfile with all modules and their current versions to be used for version pinning
  • [1244604] configure nagios alerting for the bmo/reviewboard connector
  • [1244996] add a script to manage a user’s settings

discuss these changes on mozilla.tools.bmo.


Filed under: bmo, mozilla

Hannah KaneOur Evolving Approach to the Curriculum Database

When we first envisioned the Curriculum Database, we had a variety of different focuses. The biggest one, in my mind, was helping people sort through the many different types of learning materials offered by various teams and projects at Mozilla in order to find the right content for their needs. Secondarily, we wanted to improve the creation process—make it easier to publish, remix, and localize content.

Since then, the vision has changed a bit, or rather, our approach has changed. Several of our designers (Luke, Sabrina, and Ricardo) have been conducting a small research project to find out how educators currently find learning materials. We’ve also been learning from conversations with the cross-team Curriculum working group, led by Chad Sansing. Finally, we’ve been looking at existing offerings in the world.

Content Creation

Our approach has evolved so that now we’re focusing on content creation first. Luke and Chad have been working together to improve the web literacy curriculum template.

(Screenshot: the web literacy curriculum template.)

Luke also worked with the Science Lab team to design new templates for their learning handouts:

(Screenshot: a Science Lab learning handout template.)

Next up is working with the Mozilla Clubs team to create new templates for their onboarding and training materials. The goal with each of these projects is to provide scalable, reusable templates that, once created and refined, don’t require involvement from designers or developers to continue using as new content is created. Right now we’re using a combination of Thimble projects, markdown -> html conversion, and github pages. We’re in experimentation mode.

Editing, localization, quality control, and more

Some of the next hurdles to cross are around mechanisms for localization and allowing for community contribution. One thing I’ve heard from the Curriculum working group is that collaborative editing workflows are needed. One idea I really like came from Emma Irwin who suggested an “Ask for Help” workflow that triggers a ticket in a github repo. That repo is triaged and curated by community contributors, so this could be a useful way to solicit testing and edits on new content.

I’ve also heard about a (resolvable) tension between wanting to encourage remixes and wanting to have clear quality control mechanisms. User ratings and comments can help with this. Emma has also suggested an Education Council that would review materials and mark those meeting certain criteria as “Mozilla Approved.”

And then what?

All of the above ideas fall under the “content creation” half of the puzzle. We’ve not forgotten about the “content discovery” half, but we’ve put creation first. Once we’ve developed and tested these new workflows for creation, collaborative editing, remixing, quality control, and localization, it will make sense to then switch our focus to content distribution and discovery.

Please do comment below if you have any questions or ideas about this approach.


Mozilla Open Policy & Advocacy BlogAnnouncing the 2016 Open Web Fellows Program Host Organizations

Last year was a big year for the open Web: net neutrality became a mainstream phrase in the United States, data retention and surveillance were hotly contested at government levels in the European Union, and India’s government suspended operations of Free Basics’ zero-rating practices despite Mark Zuckerberg’s insistence that he was working in the interest of the poor. Much of this was done in collaboration with organizations that share the mission to protect the open Web as a global public resource. It’s partnerships and knowledge-sharing initiatives that support these movements.

One such initiative is the Ford-Mozilla Open Web Fellows program, an international leadership program that brings together technology talent and civil society organizations to advance and protect the open Web. The Fellows embedded at these organizations will work on salient issues like privacy, access, and online rights. And this Fellowship program offers unique opportunities to learn, innovate, and gain credentials in a supportive environment while working to protect the open Web.

We are proud to announce our second cohort of host organizations, who are looking for 8 talented individuals to advise, build, and learn during their 10-month fellowships.

Apply now to become a Ford-Mozilla Open Web Fellow!
Deadline for applications: 11:59pm PST March 20, 2016


Centre for Intellectual Property and Information Technology Law (CIPIT)
CIPIT is an evidence-based research and training center based at Strathmore Law School in Nairobi, Kenya. Working with communities in extreme instances of censorship, their mission is to study and share knowledge on the development of cyberspace, and to conduct research from a multidisciplinary approach. In 2016 CIPIT will be focusing on Internet Freedom in Eastern Africa, intellectual property in African development, and network measurements in election monitoring.

CIPIT is looking for an inquisitive, focused Fellow with tech expertise who can consult on a policy-oriented research process. This Fellow could help shape the next generation of Internet laws in Africa and see first-hand the real-life need for the tools and code they generate. For example, the Fellow could develop user-focused tools that support real-life events – like the Ugandan election. Learn more here.


Citizen Lab
Citizen Lab is an interdisciplinary laboratory based at the Munk School of Global Affairs, University of Toronto, that focuses on advanced research and development at the intersection of ICTs, human rights, and global security. They provide impartial, evidence-based, peer-reviewed research on information controls to support advocacy and policy engagement on an open and secure Internet, and help secure civil society organizations from targeted attacks.

Citizen Lab is looking for a Fellow who is motivated to apply their technical skills to questions concerning technology and human rights, and who brings excellent communication skills. The Fellow could develop new tools to measure Internet filtering and network interference, investigate malware attacks or the privacy and security of apps and social media, and empower citizens by developing platforms for corporate and public transparency. Learn more here.


ColorOfChange
ColorOfChange is a leading civil rights organization that works to strengthen the voice of Black America and create positive change around political and social issues that affect the Black community. ColorOfChange supports net neutrality and the reclassification of broadband as a public utility, and works to give their members a voice. This is hugely consequential: Black and brown Americans are least able to afford the toll booths and obstacles that come with a closed Internet.

ColorOfChange is looking for a Fellow who is passionate about ensuring that the US national conversation around net neutrality includes arguments in its favor from a civil rights perspective. This Fellow would have the opportunity to pioneer tools for rapid-response campaigning that could be replicated and used by millions, find a compelling approach for users to engage with data that is integrated into the presentation itself, and leverage mobile (and perhaps wearable) technology for activism. Learn more here.


Data & Society
Data & Society is a research institute that is committed to identifying issues at the intersection of technology and society. They focus on social, cultural, and ethical issues arising from data-centric technological development. In 2016, they will focus on identifying major emergent issues stemming from new data-driven technologies, develop tools to help people better understand issues, and build a diverse network of researchers and practitioners.

Data & Society is looking for a Fellow who is deeply versed in technical conversations, and understands that new massive technologies are creating disruption. This Fellow would work with people from other fields to raise the technical capacity of others in the network, and engage technical communities core to Data & Society’s mission. Learn more here.


Derechos Digitales
Derechos Digitales is an organization that promotes human rights in digital environments. Their work focuses on the nuanced realities of Latin American countries, and brings these perspectives to discussions around issues like cybersecurity and corporate transparency. They work to shape policy-making on issues such as mass surveillance, digital threats to activists, and legislative work on Internet governance. In 2016 they will focus on privacy, freedom of expression, and access to knowledge.

Derechos Digitales is looking for a Fellow with tech expertise who is passionate about working at the intersection of human rights and tech policy in the global south. The Fellow could provide technical advice on the tools and resources needed in these contexts, and develop tech policy documents that can bridge the human rights and tech communities. Derechos Digitales is looking for a Spanish-speaking Fellow who would be comfortable supporting capacity building sessions with local civil society organizations. Learn more here.


European Digital Rights (EDRi)
EDRi is an association of 33 civil rights organizations from across Europe, and works to promote, protect and uphold civil and human rights in the digital environment in the European Union. Their four key priorities for 2016 are data protection and privacy, mass surveillance, copyright reform and net neutrality. EDRi supports Europe’s data protection reform and campaigned against EU state surveillance proposals. The current onslaught of “counter-terrorism” proposals after recent attacks sees European governments adopting new laws with little consideration of effectiveness, proportionality, or whether privacy is being sacrificed.

EDRi is looking for a Fellow who is passionate about raising awareness about EU digital rights, and can use their technical expertise to help educate the general public, tech-policy community, and policy-makers. For example, the Fellow could explain existing data collection practices and newly gained online rights to users via an app or other tool, depending on the Fellow’s talents and preferences. The Fellow could provide technical assistance to help policy-makers and regulators understand the tools used by online companies for tracking and monitoring. Learn more here.


Freedom of the Press Foundation
Freedom of the Press Foundation is a non-profit organization that supports and defends journalism dedicated to transparency and accountability. They believe one of the most critical press freedom issues of the 21st Century is digital security, and work to ensure journalists can use technology to do their jobs safely and without the constant fear of surveillance.

Freedom of the Press Foundation is looking for a Fellow with strong technical abilities who is interested in helping journalists work safely and communicate securely. The Fellow would apply their skills, alongside Freedom of the Press Foundation’s talented staff of technologists and engineers, to build and support tools like SecureDrop that help journalists communicate securely with sources and whistleblowers. Learn more here.


Privacy International
Privacy International focuses on privacy issues around the world. They advocate for strong privacy protection laws, investigate government surveillance, conduct research to enact policy change, and raise awareness amongst the public about technologies that place privacy at risk. In 2016 Privacy International is partnering with organizations in the global south to identify privacy challenges, and expanding its work on data exploitation.

Privacy International is looking for a Fellow who’s eager to learn and find new challenges. The Fellow would use their strong technical skills to translate technology for policy-makers, and help others around the world do the same. The Fellow would work with Privacy International’s Tech Team to analyze surveillance documentation and data, identify and analyze new technologies, and help develop briefings and educational programming grounded in technical understanding. Learn more here.


Apply now to become a Ford-Mozilla Open Web Fellow!
Deadline for applications: 11:59pm PST March 20, 2016

John O'Duinn“Distributed” ER#5 now available!

“Distributed” Early Release #5 is now publicly available, just 23 days after ER#4 came out.

Early Release #5 (ER#5) contains everything in ER#4 plus:
* Ch.12: Group Chat Etiquette
* In Ch.2, the section on diversity was rewritten and enhanced by consolidating a few passages that had been scattered across other chapters.
* Many tweaks and fixes to pre-existing chapters.

You can buy ER#5 by clicking here, or clicking on the thumbnail of the book cover. Anyone who already has ER#1, ER#2, ER#3 or ER#4 should get prompted with a free update to ER#5 – if you don’t, please let me know! And yes, you’ll get another update when ER#6 comes out later this month.

Thanks again to everyone for the ongoing encouragement, proof-reading help, and feedback so far. I track every piece of feedback and review/edit/merge as fast as I can. To make sure that any feedback doesn’t get lost or caught in spam filters, please email comments to feedback at oduinn dot com.

Now time to brew more coffee and get back to typing!

John.
=====
ps: For the curious, here is the current list of chapters and their status:

Chapter 1: Remoties trend – AVAILABLE
Chapter 2: The real cost of an office – AVAILABLE
Chapter 3: Disaster Planning – AVAILABLE
Chapter 4: Mindset – AVAILABLE
Chapter 5: Physical Setup – AVAILABLE
Chapter 6: Video Etiquette – AVAILABLE
Chapter 7: Own your calendar – AVAILABLE
Chapter 8: Meetings – AVAILABLE
Chapter 9: Meeting Moderator – AVAILABLE
Chapter 10: Single Source of Truth
Chapter 11: Email Etiquette – AVAILABLE
Chapter 12: Group Chat Etiquette – AVAILABLE
Chapter 13: Culture, Trust and Conflict
Chapter 14: One-on-Ones and Reviews – AVAILABLE
Chapter 15: Joining and Leaving
Chapter 16: Bring Humans Together – AVAILABLE
Chapter 17: Career path – AVAILABLE
Chapter 18: Feed your soul – AVAILABLE
Chapter 19: Final Chapter

=====

Mozilla Addons BlogFirefox Accounts on AMO

In order to provide a more consistent experience across all Mozilla products and services, addons.mozilla.org (AMO) will soon begin using Firefox Accounts.

During the first stage of the migration, which will begin in a few weeks, you can continue logging in with your current credentials and use the site as you normally would. Once you’re logged in, you will be asked to log in with a Firefox Account to complete the migration. If you don’t have a Firefox Account, you can easily create one during this process.

Once you are done with the migration, everything associated with your AMO account, such as add-ons you’ve authored or comments you’ve written, will continue to be linked to your account.

A few weeks after that, when enough people have migrated to Firefox Accounts, old AMO logins will be disabled. This means that if you try to log in with your old AMO credentials, you won’t be able to use the site until you follow the prompt to log in with or create a Firefox Account.
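
To illustrate the two-stage rollout described above, here is a minimal sketch of the decision logic a login handler might follow, written in TypeScript with hypothetical names; this is not AMO’s actual implementation, only an illustration of the behavior described in this post:

```typescript
// Hypothetical sketch of the two-stage AMO login migration described above.
// Type and function names are illustrative only, not AMO's real code.

interface User {
  hasLegacyAmoCredentials: boolean;
  hasLinkedFirefoxAccount: boolean;
}

type NextStep =
  | "proceed"                         // migration already complete
  | "prompt-to-link-firefox-account"  // stage 1: legacy login still works
  | "must-link-firefox-account";      // stage 2: legacy logins disabled

function nextStepAfterLogin(user: User, legacyLoginsDisabled: boolean): NextStep {
  if (user.hasLinkedFirefoxAccount) {
    // Add-ons, reviews, etc. stay linked to the migrated account.
    return "proceed";
  }
  if (legacyLoginsDisabled) {
    // Stage 2: the old AMO password no longer grants access on its own.
    return "must-link-firefox-account";
  }
  // Stage 1: the user is logged in but asked to complete the migration.
  return "prompt-to-link-firefox-account";
}
```

This mirrors the flow described above: during the first stage both login paths work, and once the second stage begins the Firefox Account prompt becomes mandatory.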

For more information, please take a look at the Frequently Asked Questions below, or head over to the forums. We’re here to help, and we apologize for any inconvenience.

Frequently asked questions

What happens to my add-ons when I convert to a new Firefox Account?

All of your add-ons will remain accessible from your new Firefox Account.

Why do I want a Firefox Account?

Firefox Accounts is the identity system that is used to synchronize Firefox across multiple devices. Many Firefox products and services will soon begin migrating over, simplifying your sign-in process and making it easier for you to manage all your accounts.

Where do I change my password?

Once you have a Firefox Account, you can go to accounts.firefox.com, sign in, and click on Password.

If you have forgotten your current password:

  1. Go to the AMO login page
  2. Click on I forgot my password
  3. Proceed to reset the password

The Servo BlogThis Week In Servo 49

In the last week, we landed 87 PRs in the Servo organization’s repositories.

Mátyás Mustoha has been doing awesome work bringing fixes and stability to our cross-compilation to both AArch64 and ARM 32-bit. He has a nightly build that runs here, which we hope to integrate into our own CI systems. Thanks for your great work improving the experience targeting those platforms!

Notable Additions

  • pcwalton removed some potential deadlock situations in ipc-channel
  • ms2ger moved gaol (our sandboxing library) to the Servo organization and enabled CI support for it
  • manish upgraded homu to pick up a bunch of new updates, including UI cleanups!
  • nox upgraded our Rust build to January 31st
  • mmatyas added AArch64 support to gaol, and generally cleaned it up in other repos, too
  • larsberg added some instructions on how to build and run Servo on Windows
  • simon landed CSS Multicolumn support with block fragmentation

New Contributors

Screenshot

No screenshot this week.

Meetings

We had a meeting on changing our weekly meeting time, our build time trend, potential student projects, and Windows support.

Mitchell BakerDr. Karim Lakhani Appointed to Mozilla Corporation Board of Directors

As we just posted on the Mozilla Blog, today we are very pleased to announce an addition to the Mozilla Corporation Board of Directors, Dr. Karim Lakhani, a scholar in innovation theory and practice. Dr. Lakhani is the first of the new appointments we expect to make this year. We are working to expand our […]