Kartikaya Gupta: Firewalling, part 2

I previously wrote about setting up multiple VLANs to segment your home network and improve the security characteristics. Since then I've added more devices to my home network, and keeping everything in separate VLANs was looking like it would be a hassle. So instead I decided to put everything into the same VLAN but augment the router's firewall rules to continue restricting traffic between "trusted" and "untrusted" devices.

The problem is, that didn't work. I set up all the firewall rules, but for some reason they weren't being respected. After (too much) digging I finally discovered that you have to install the kmod-ebtables package to get this to actually work. Without it, the netfilter code in the kernel doesn't filter traffic between hosts on the same VLAN, so any rules you have for that get ignored. After installing kmod-ebtables, my firewall rules started working. Yay!

Along the way I also discovered that OpenWRT is basically dead now (they haven't had a release in a long time) and the LEDE project is the new fork/successor project. So if you were using OpenWRT you should probably migrate. The migration was relatively painless for me, since the images are compatible.

There's one other complication that I've run into but haven't yet resolved. After upgrading to LEDE and installing kmod-ebtables, for some reason I couldn't connect between two FreeBSD machines on my network via external IP and port forwarding. The setup is like so:

  • Machine A has internal IP address 192.168.1.A
  • Machine B has internal IP address 192.168.1.B
  • The router's external IP address is E
  • The router is set to forward port P to machine A
  • The router is set to forward port Q to machine B

Now, from machine B, if I connect to E:P, it doesn't work. Likewise, from machine A, connecting to E:Q doesn't work. I can connect using the internal IP address (192.168.1.A:P or 192.168.1.B:Q) just fine; it's only via the external IP that it doesn't work. All the other machines on my network can connect to E:P and E:Q fine as well. It's only machines A and B that can't talk to each other. The thing A and B have in common is that they are running FreeBSD; the other machines I tried were Linux/OS X.

Obviously the next step here is to fire up tcpdump and see what's going on. Funny thing is, when I run tcpdump on my router, the problem goes away and the machines can connect to each other. So there's that. I'm sure with more investigation I'll get to the bottom of this but for now I've shelved it under "mysteries that I can work around easily". If anybody has run into this before I'd be interested in hearing about it.

Also, if anybody knows of good tools to visualize and debug iptables rules, I'd be interested to try them out, because I haven't found anything good yet. I've been using the counters in the tables to try to figure out which rules the packets are hitting, but since I'm debugging this "live" there's a lot of noise from random devices, and the counters are not as reliable as I'd like.
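One trick that can cut through that noise is to snapshot the per-rule packet counters, make a single test connection, and then diff the counters so that only the rules that were actually hit show up. Below is a rough sketch of that idea in Python; the iptables -nvxL output format it parses and the 10-second window are assumptions about a typical Linux setup rather than anything specific to my router, and you'd run it wherever you can invoke iptables (or adapt it to parse output captured over ssh):

#!/usr/bin/env python3
# Snapshot iptables packet counters, wait while a single test connection
# is made, then print only the rules whose counters changed.
import subprocess
import time

def read_counters(table="filter"):
    """Return {(chain, rule_index, rule_text): packet_count}."""
    out = subprocess.check_output(
        ["iptables", "-t", table, "-nvxL"], universal_newlines=True)
    counters = {}
    chain, index = None, 0
    for line in out.splitlines():
        fields = line.split()
        if line.startswith("Chain "):
            chain, index = fields[1], 0
        elif fields and fields[0].isdigit():
            index += 1
            counters[(chain, index, " ".join(fields[2:]))] = int(fields[0])
    return counters

before = read_counters()
print("Make your single test connection now...")
time.sleep(10)
after = read_counters()

for key in sorted(after):
    delta = after[key] - before.get(key, 0)
    if delta:
        chain, index, rule = key
        print("%+6d  %s #%d: %s" % (delta, chain, index, rule))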

Robert Kaiser: Celebrating LCARS With One Last Theme Release

30 years ago, a lot of people were wondering what the new Star Trek: The Next Generation series would bring when it debuted in September 1987. The principal cast had been announced, there was a new Enterprise, and even the pilot's title was known, but - as always with a new production - a lot of questions remained open, just like today in 2017 with Star Trek: Discovery, which is set to debut in September, almost to the day on the 30th anniversary of The Next Generation.

Given that the story was set 100 years after the original, and what was considered "futuristic" had changed significantly between the late 1960s and the 1980s, the design language had to be substantially updated, including the labels and screens on the new Enterprise. Scenic art supervisor and technical consultant Michael Okuda, who had done starship computer displays for The Voyage Home, was hired to do those for the new series, and was instructed by series creator and show runner Gene Roddenberry that this futuristic ship should have "simple and clean" screens and not much animation (the latter probably also due to budget and technology constraints - the "screens" were built out of colored plexiglass with lights behind them).



With that, Okuda created a look that became known as "LCARS" (for "Library Computer Access and Retrieval System", which actually was the computer system's name). Instead of the huge gray panels with big, brightly-colored physical buttons of the original series, The Next Generation had touch-screen panels with a dark background and flat-style buttons in pastel color tones. The flat design, including the fonts and frames, is very similar to quite a few designs we see on touch-friendly mobile apps 30 years later. Touch screens (and even cell phones and tablets) were pretty much unheard of and "future talk" when Mike Okuda created those designs, but he came to pretty similar design conclusions as those who design UIs for modern touch-screen devices (which is pretty awesome when you think about it).

I was always fascinated with that style of UI design, even on non-touch displays (and am even more so now that I'm using touch screens daily). So 18 years ago, when I did my first experiments with Mozilla's new browser-mail all-in-one package and realized that the UI was displayed with the same rendering engine and the same or very similar technologies as websites, I immediately made some CSS changes to see if I could apply LCARS-like styling to this software - and awesomeness ensued when I found out that it worked!

Over the years, I created a full LCARStrek theme from those experiments (the first release, 0.1, was for Mozilla suite nightlies in late 2000), adapted it to Firefox (starting with LCARStrek 2.1 for Firefox 4), refined it, and even made it work with large Firefox redesigns. But as you may have heard, huge changes are coming to Firefox add-ons, and full-blown themes in the manner of LCARStrek cannot be done in the new world as it stands right now, so I'm forced to stop developing this theme.

Given that LCARS has a huge anniversary this year, I want to end my work on this theme on a high note rather than a sad one, so right alongside the very awesome Star Trek Las Vegas convention, which of course just celebrated 30 years of The Next Generation, I'm doing one last LCARStrek release this weekend, with special thanks to Mike Okuda, whose great designs made this theme possible in the first place (picture taken by me at that convention just two weeks ago, where he was talking about the backlit LCARS panels that were dubbed "Okudagrams" by other crew members):
Image No. 23314

Live long and prosper!

Robert Kaiser: Lantea Maps: GPS Track Upload to OpenStreetMap Broken

During my holidays, when I was using Lantea Maps daily to record my GPS tracks, I found out one day that uploading the tracks to OpenStreetMap was broken.

I had added that functionality so that people (including myself) could get their GPS tracks out of their mobile devices and into a place from which they can download them anywhere. A bonus was that the tracks were available to the OpenStreetMap project as guides to improve the maps.

After I had wasted about EUR 50 in data roaming costs to verify that it was broken not only on hotel networks but also on my mobile network, which usually worked, I tried on a desktop Nightly and used the Firefox devtools to find out the actual error message, which turned out to be a CORS issue. I filed a GitHub issue, but apparently it was an intentional change: OpenStreetMap no longer supports GPS track uploads in a way that is simple for pure web apps, and doesn't want to re-add support for that. Find more details in the GitHub issue.

Because of that, I think that this will mark the end of uploading tracks from Lantea Maps to OpenStreetMap. When I have time, I will probably add a GPS track store on my server instead, where third-party changes can't break stuff while I'm on vacation. If any Lantea Maps user wants their tracks on OpenStreetMap in the future, they'll need to manually upload the tracks themselves.

J.C. Jones: The State of CRLs Today

Certificate Revocation Lists (CRLs) are a way for Certificate Authorities to announce to their relying parties (e.g., users validating the certificates) that a certificate they issued should no longer be trusted - i.e., that it was revoked.

As the name implies, they're just flat lists of revoked certificates. This has advantages and disadvantages:

Advantages:

  • It's easy to see how many revocations there are
  • It's easy to see differences from day to day
  • Since processing the list is up to the client, it doesn't reveal what information you're interested in

Disadvantages:

  • They can quickly get quite big, leading to significant latency while downloading a web page
  • They're not particularly compressible
  • There's information in there you probably will never care about

CRLs aren't much used anymore; Firefox stopped checking them in version 28 in 2014, in favor of online status checks (OCSP).

The Baseline Requirements nevertheless still require that CRLs, if published, remain available:

4.10.2 Service availability

The CA SHALL operate and maintain its CRL and OCSP capability with resources sufficient to provide a response time of ten seconds or less under normal operating conditions.

Since much has been written about the availability of OCSP, I thought I'd check in on CRLs.

Collecting available CRLs

When a certificate's status will be available in a CRL, that's encoded into the certificate itself (RFC 5280, 4.2.1.13). If that field is there, we should expect the CRL to survive for the lifetime of the certificate.
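For a single certificate you can read that same extension locally. Here's a minimal sketch using the Python cryptography package (cert.pem is just an example filename, and a certificate without the extension would raise ExtensionNotFound, which this doesn't handle):

# Print the CRL distribution point URLs embedded in a certificate
# (the extension described in RFC 5280, section 4.2.1.13).
from cryptography import x509
from cryptography.hazmat.backends import default_backend

with open("cert.pem", "rb") as f:
    cert = x509.load_pem_x509_certificate(f.read(), default_backend())

ext = cert.extensions.get_extension_for_class(x509.CRLDistributionPoints)
for point in ext.value:
    for name in point.full_name or []:
        print(name.value)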

I went to Censys.io and after a quick request to them for SQL access, I ran this query:

SELECT parsed.extensions.crl_distribution_points
  FROM certificates.certificates
 WHERE validation.nss.valid = true
   AND parsed.extensions.crl_distribution_points LIKE 'http%'
   AND parsed.validity.end >= '2017-07-18 00:00'
 GROUP BY parsed.extensions.crl_distribution_points

Today, this yields 3,035 CRLs, the list of which I've posted on Github.

Downloading those CRLs into a directory downloaded_crls can be done serially using wget quite simply, logging to a file named wget_log-all_crls.txt:

mkdir downloaded_crls  
script wget_log-all_crls.txt wget --recursive --tries 3 --level=1 --force-directories -P downloaded_crls/ --input-file=all_crls.csv  

This took 2h 36m 31s on my Internet connection.
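Nothing stops you from parallelizing the fetch if the serial wall-clock time bothers you. Here's a rough sketch using Python's requests and a thread pool; the file names, worker count and directory layout are assumptions (it loosely mimics wget --force-directories) and it skips wget's retry logic:

# Download every CRL URL listed in all_crls.csv (one URL per line) into
# downloaded_crls/, fetching several in parallel. Failures are reported
# rather than retried.
import os
from concurrent.futures import ThreadPoolExecutor
from urllib.parse import urlparse

import requests

def fetch(url):
    url = url.strip()
    parsed = urlparse(url)
    # Keep host and path in the on-disk layout, like wget --force-directories.
    path = parsed.path.lstrip("/")
    if not path or path.endswith("/"):
        path += "index"
    dest = os.path.join("downloaded_crls", parsed.netloc, path)
    os.makedirs(os.path.dirname(dest), exist_ok=True)
    try:
        resp = requests.get(url, timeout=30)
        resp.raise_for_status()
        with open(dest, "wb") as f:
            f.write(resp.content)
        return url, None
    except Exception as exc:
        return url, exc

with open("all_crls.csv") as f:
    urls = [line for line in f if line.strip()]

with ThreadPoolExecutor(max_workers=16) as pool:
    for url, error in pool.map(fetch, urls):
        if error:
            print("FAILED %s: %s" % (url, error))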

Analyzing the Download Process

Out of 3,035 CRLs, I ended up downloading 2,993 files. The rest failed.

I post-processed the command line wget log (wget_log-all_crls.txt) using a small Python script to categorize each CRL download by how it completed.
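I don't have that script, but a categorization pass over a wget log can be as short as the sketch below. The marker strings are assumptions based on typical wget messages (and the code is mine, not the script used for the chart), so adjust them to whatever your wget version actually prints:

# Rough categorization of a wget log: count fetches that were saved,
# redirected, returned an HTTP error, failed DNS resolution, or timed
# out. The phrases matched here are typical wget messages and may
# differ between wget versions and locales.
from collections import Counter

MARKERS = [
    ("redirect", "[following]"),
    ("http_error", "ERROR 404"),
    ("http_error", "ERROR 400"),
    ("http_error", "ERROR 500"),
    ("dns_failure", "unable to resolve host address"),
    ("timeout", "timed out"),
    ("saved", "saved ["),
]

counts = Counter()
with open("wget_log-all_crls.txt", errors="replace") as log:
    for line in log:
        for category, marker in MARKERS:
            if marker in line:
                counts[category] += 1
                break

for category, count in counts.most_common():
    print("%-12s %d" % (category, count))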

Ignoring all the times when requesting a file resulted in the file straightaway (hey, those cases are boring), here's the graphical breakdown of the other cases:

Problems with CRL Downloads

Missing CRLs

There are 40 CRLs that weren't available to me when I checked, or more simply put, 1% of CRLs appear to be dead.

Some of them are dead in temporary-looking ways, like a load balancer giving a 500 Internal Server Error; others have hostnames that aren't resolving in DNS.

These aren't currently resolving for me:

Searching Censys' dataset, these CRLs are only used by intermediate CAs, so presumably if one of the handful of CA certificates covered needed to be revoked, their IT staff could fix these links.

Except for http://atospki/, which is clearly an internal name. Certificates with mistakes like that can only be revoked via technologies like OneCRL and CRLSets.

The complete list of 400s, 404s, and timeouts by URL is available in crl_resolutions.csv.

Are the missing CRLs a problem?

This doesn't attempt to eliminate possible false-positives where the CRL was for a certificate which is revoked by its parent. For example, if there is a chain Root -> A -> B -> C, and A is revoked, it may not be important that A's CRL exist. (Thanks, @sleevi for pointing this out!)

Redirects

As could be expected, there were a fair number of CRLs which are now serviced by redirects. Interestingly, while section 7.1.2.2(b) of the Baseline Requirements requires CRLs to have an "HTTP URL", 13 of the CRL fetches redirect to HTTPS, two of them through HSTS headers [1].

There was a recent thread on Mozilla.dev.security.policy about OCSP responders that were only available over HTTPS; these are problematic as OCSP and CRLs are used to decide whether a secure connection is possible. Having to make such a determination for the revocation check leads to a potential deadlock, so most software will refuse to try it.

Interestingly, there's one CRL that is encoded as HTTPS directly in certificates: https://crl.firmaprofesional.com/fproot.crl [Censys.io search][Example at crt.sh] That's pretty clearly a violation of the Baseline Requirements.

Sizes

I've generally understood that most CRLs are small but some are very large, so I expected some kind of bi-modal distribution. It really isn't one, though the retrieved CRLs do have a wild size distribution:

Size Distribution of CRLs

In table form [2]:

Size Buckets           # of CRLs
up to 0.5 KB           174
0.5 KB to 0.625 KB     264
0.625 KB to 0.75 KB    246
0.75 KB to 1 KB        310
1 KB to 2 KB           366
2 KB to 4 KB           237
4 KB to 8 KB           232
8 KB to 32 KB          500
32 KB to 64 KB         297
64 KB to 128 KB        218
128 KB to 1 MB         106
1 MB to 8 MB           33
8 MB to 128 MB         9

I figured that most CRLs would be tiny, and we'd have a handful of outliers. Indeed, 50% of the CRLs are less than 4 Kbytes, and 75% are less than 32 Kbytes:
Cumulative Distribution of CRL size
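If you've mirrored the CRLs yourself, the buckets and percentiles are easy to recompute. A quick sketch, where the directory name and bucket edges are assumptions that roughly match the table above:

# Walk downloaded_crls/ and bucket the file sizes, then print a couple
# of percentiles. Bucket edges (in KB) roughly match the table above.
import os

EDGES_KB = [0.5, 0.625, 0.75, 1, 2, 4, 8, 32, 64, 128, 1024, 8192, 131072]

sizes = []
for root, _dirs, files in os.walk("downloaded_crls"):
    for name in files:
        sizes.append(os.path.getsize(os.path.join(root, name)))
sizes.sort()

buckets = [0] * (len(EDGES_KB) + 1)
for size in sizes:
    for i, edge in enumerate(EDGES_KB):
        if size <= edge * 1024:
            buckets[i] += 1
            break
    else:
        buckets[-1] += 1

for i, count in enumerate(buckets):
    label = ("<= %g KB" % EDGES_KB[i]) if i < len(EDGES_KB) else "larger"
    print("%-14s %d" % (label, count))

if sizes:
    print("median: %d bytes" % sizes[len(sizes) // 2])
    print("75th percentile: %d bytes" % sizes[(len(sizes) * 3) // 4])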

On the top end, however, are 9 CRLs larger than 8 MB:

URL                                                            Size
http://www.sk.ee/repository/crls/esteid2011.crl                66.57 MB
http://crl.godaddy.com/repository/mastergodaddy2issuing.crl    36.22 MB
http://crl.eid.belgium.be/eidc201208.crl                       16.03 MB
http://crl.eid.belgium.be/eidc201204.crl                       10.84 MB
http://crl.eid.belgium.be/eidc201207.crl                       10.82 MB
http://crl.eid.belgium.be/eidc201202.crl                       10.67 MB
http://crl.eid.belgium.be/eidc201203.crl                       10.66 MB
http://crl.eid.belgium.be/eidc201201.crl                       10.47 MB

Remember, these are part of the WebPKI, not some private hierarchy.[3] For a convenient example of why browsers don't download CRLs when connecting somewhere, just point to these.

Download Latency

Latency matters. I'm on a pretty fast Internet connection, but even so, some CRLs of quite reasonable size took a while to download. I won't harp on this, but here's a quick histogram:

Histogram of CRLs bucketed by download time

CRLs that took longer than 1 second to download on a really fast Internet connection -- 142 of them, or 4.7% -- are clear reasons for users' software to not check them for live revocation status.

Conclusions (such as they are)

CRLs are not an exciting technology, but they're still used by the Web PKI. Since they're not exciting, it appears that some CAs believe they don't even need to keep their CRLs online; I mean, who checks these things, anyway?

Oh, yeah, me...

Still, with technologies such as CRLSets depending on CRLs as a means for revocation data, they clearly still have a purpose. It's not particularly convenient to make a habit of crawling OCSP responders to figure out the state of revocations on the Web.

Footnotes

[1] Note: that's not found by the Python script; you'll need to grep the log for "URL transformed to HTTPS due to an HSTS policy".

[2] I admit that the buckets are a bit arbitrary, but here's what it looks like without some manual massaging:
Auto-generated buckets

[3] Most of these are not realistically going to be reached by browsers, however. The largest contains revocations that appear to belong to a government's national ID card list. GoDaddy's is a master list, but is only referred to by a revoked cert [crt.sh link].

Jared Wein: Photon Engineering Newsletter #13

This week I’m taking over for Dolske as he takes a vacation to view the eclipse. This is issue #13 of the Photon Engineering Newsletter.

This past week the Nightly team has had some fun with the Firefox icon. We've seen the following icons grace Nightly builds:

The icon in the top-left was created in 2011 by Sean Martell. The icon in the top-right was the original Phoenix icon. Phoenix was later renamed Firebird, and the name was later changed again to Firefox. The icon in the bottom-left was the first "Firefox" icon, designed by Steven Garrity in 2003. The icon in the bottom-right, well, it is such logo with much browser - we couldn't help but share it.

Recent Changes

Menus/structure:

The Report Site Issue button has been moved to the Page Action menu in Nightly and Dev Edition. This button doesn’t ship to users on Beta or Release.

Probably the biggest visual change this week is that we now have spacers in the toolbar. These help to separate the location bar from the other utility buttons, and also keep the location bar relatively centered within the window. We have also replaced the bookmarks menu button with the Library button (it’s the icon that looks like books on a shelf).

We also widened various panels to help fit more text in them.

Animation:

The Pin to Overflow animation has also been tweaked to not move as far. This will likely be the final adjustment to this animation (seen on the left). The Pocket button has moved to the location bar and the button expands when a page is saved to Pocket (seen on the right).

Preferences:

Work has continued on the visual redesign of Preferences for Firefox 57. New icons have landed for the various categories within Preferences, and some borders and margins have been adjusted.

Visual redesign:

The tab label is no longer centered on Mac. This now brings Linux, Mac, and Windows to all have the same visual treatment for tabs.

Changing to Compact density within Customize mode changes the toolbar buttons to now use less horizontal space. The following GIF shows the theme changing from Compact to Normal to Touch densities.

density

Onboarding:

New graphics for the onboarding tour have landed.

Performance:

Two of the main engineers focusing on Performance were on PTO this past week so we don’t have an update from them.


Mitchell Baker: Resignation as co-chair of the Digital Economy Board of Advisors

For the past year and a half I have been serving as one of two co-chairs of the U.S. Commerce Department Digital Economy Board of Advisors. The Board was appointed in March 2016 by then-Secretary of Commerce Penny Pritzker to serve a two-year term. On Thursday I sent the letter below to Secretary Ross.

Dear Secretary Ross,
I am resigning from my position as a member and co-chair of the Commerce Department’s Digital Economy Board of Advisors, effective immediately.
It is the responsibility of leaders to take action and lift up each and every American. Our leaders must unequivocally denounce bigotry, racism, sexism, hate, and violence.
The digital economy is fundamental to creating an economy that offers opportunity to all Americans. It has been an honor to serve as member and co-chair of this board and to work with the Commerce Department staff.
Sincerely,
Mitchell Baker
Executive Chairwoman
Mozilla

Air Mozilla: Webdev Beer and Tell: August 2017, 18 Aug 2017

Webdev Beer and Tell: August 2017 Once a month web developers across the Mozilla community get together (in person and virtually) to share what cool stuff we've been working on in...

David Teller: JavaScript Binary AST Engineering Newsletter #1

Hey, all cool kids have exciting Engineering Newsletters these days, so it’s high time the JavaScript Binary AST got one! Summary: JavaScript Binary AST is a joint project between Mozilla and Facebook to rethink how JavaScript source code is stored/transmitted/parsed. We expect that this project will help visibly speed up the loading of large JS application codebases and will have a large impact on the JS development community, including web developers, Node developers, add-on developers, and ourselves.

Ehsan Akhgari: Quantum Flow Engineering Newsletter #20

It is hard to believe that we’ve gotten to the twentieth of these newsletters.  That also means that we’re very quickly approaching the finish line for this sprint.  We only have a bit more than five more weeks to go before Firefox 57 merges to beta.  It may be a good time to start to think more carefully about what we pay attention to in the remaining time, both in terms of the risk of patches landing, and the opportunity cost of what we decide to put off until 58 and the releases after.

We still have a large number of triaged bugs that are available for someone to pick up and work on.  If you have some spare cycles, we would really appreciate it if you considered picking one or two bugs from this list and working on them.  They span many different areas of the codebase, so finding something in your area of interest and expertise should hopefully be simple.  Quantum Flow isn’t the kind of project that requires fixing every single one of these bugs to finish successfully, but at the same time big performance improvements often consist of many small parts, so the cumulative effect of a few additional fixes can make a big difference.

It is worth mentioning that lately, while lurking on various tech news and blog sites where Nightly users comment, I have seen quite a few positive comments about Nightly performance from users.  It’s easy to get lost in the details of the work involved in getting rid of synchronous IPCs, synchronous layout/style flushes, unnecessary memory allocations, and hashtable lookups, improving data locality and JavaScript JIT performance, making sure code gets inlined better, shipping a new CSS engine, etc. etc., but it is reassuring to see people take notice 🙂

Moving on, let me mention one point about the Speedometer charts on AWFY which I have gotten a few questions about recently.  We now have Speedometer benchmark numbers reported for Firefox Beta on the reference hardware, in addition to inbound optimized and PGO builds.  You may notice that the benchmark scores we are getting on Beta are around the same as Nightly (which swings around 83-84 these days).  This doesn’t mean that we haven’t made any improvements on Nightly since the last Beta merge!  We have some Nightly-only telemetry code and some features that are only enabled on the Nightly channel, and those add a bit of overhead, which causes us to see a bit of an improvement after an uplift from mozilla-central to mozilla-beta without any code changes.  This means that when the current code on Nightly gets merged to Beta 57, we should similarly expect a bit of an improvement.

And now let me take a moment to acknowledge the work of some of those who helped make Firefox faster last week.  I hope I’m not dropping anyone’s name mistakenly.

Karl Dubost: About Publishing Code Benchmarks

We often see code benchmarks. Browser X's HTML renderer is faster than browser Y's renderer. Some JavaScript engine outperforms the competition twofold.

While these benchmarks give a kind of instant gratification for the product, they always make me dubious, no matter who they come from. If the goal is just to outperform another browser, then I sense that nothing useful has really been accomplished. Even as a marketing technique, I don't think it's working.

When/if publishing a benchmark, focus on three things:

  • How does this new code outperform previous versions of the code? It's good to show that we care about our product and that we want to be faster where and when it matters.
  • How does this improve the user experience on specific sites? Improving speed in a controlled environment like a benchmark is nice, but improving speed on real-world websites is even better. Did it make JavaScript-controlled scrolling faster and smoother?
  • How did we get there? What steps were taken to improve the code's performance? What coding tricks and techniques were used to make it faster?

Those are the benchmark blog posts I like to read. So, as a summary:

Good benchmarks: 1. outperform your own previous code, 2. demonstrate improvements on real websites, 3. give technical explanations.

Otsukare!

Air Mozilla: Intern Presentations: Round 5: Thursday, August 17th

Intern Presentations: Round 5: Thursday, August 17th Intern Presentations 7 presenters Time: 1:00PM - 2:45PM (PDT) - each presenter will start every 15 minutes 3 SF, 1 TOR, 1 PDX, 2 Paris

The Firefox Frontier: The Lightweight Browser: Firefox Focus Does Less, Which Is So Much More

Firefox had a baby and named it Focus! Firefox Focus is the new private browser for iOS and Android, made for those times when you just need something simple and … Read more

Emma Irwin: I Need Your Open Source Brain

Photo credit: Internet Archive Book Images via Visual Hunt / No known copyright restrictions

Together with help from leaders in Teaching Open Source (TOS), POSSE, and others, I’m developing a series of learning modules intended to help Computer Science / technical students gain a holistic understanding of open source, with built-in opportunities to ‘learn by doing’.  These modules are intended to support students in their goals as they build Open Source Clubs (new website coming soon) on their campuses.

And I need your help!

I need your brain, what it knows about Open Source, what skills, knowledge, attitudes, visions you think are important and crucial. I also need your brain to ..um brainstorm(!) ideas for real world value in open source ‘open educational’ offerings.

There’s a Github task for that!

Did I mention I need your Open Source Brain? I really do…

You’ll find checklists for review at the bottom of each

Mozilla VR Blog: Samsung Gear VR support lands in Servo

We are happy to announce that Samsung Gear VR headset support is landing in Servo. The current implementation is WebVR 1.1 spec-compliant and supports both the remote and headset controllers available in the Samsung Gear VR 2017 model.

If you are eager to explore, you can download a project template compatible with Gear VR Android phones. Add your Oculus signature file, and run the project to launch the application on your mobile phone.

Alongside the Gear VR support, we worked on other Servo areas in order to provide A-Frame compatibility, WebGL extensions, optimized Android compilations and reduced Servo startup times.

A-Frame Compatibility

Servo now supports Mutation Observers, which enables us to polyfill Custom Elements. Together with a solid WebVR architecture and better texture loading, we can now run any A-Frame content across mobile (Google Daydream, Samsung Gear VR) and desktop (HTC Vive) platforms. All the pieces have fallen into place thanks to all the amazing work that the Servo team is doing.

WebGL Extensions

WebGL Extensions enable applications to get optimal performance by taking advantage of state-of-the-art GPU capabilities. This is even more important in VR because of the extra work required for stereo rendering. We designed the WebGL extension architecture and implemented some of the extensions used by A-Frame/Three.js such as float textures, instancing, compressed textures and VAOs.

Compiling Servo for Android

Recently, the Rust team changed the default Android compilation targets. They added an armv7-linux-androideabi target corresponding to the official armeabi-v7a ABI and changed arm-linux-androideabi to correspond to the official armeabi ABI instead of armeabi-v7a.

This could cause significant performance regressions in Servo because it was using the arm-linux-androideabi target by default. Using the new armv7 compilation target is easy for pure-Rust crates. It’s not so trivial for CMake- or Makefile-based dependencies, because they infer the toolchain and compiler names from the target name triple.

We adapted all the problematic dependencies. We took advantage of this work to add arm64 compilation support and provided a simple CLI API to select any Android compilation target in Servo.

Reduced startup times

The C-based libfontconfig library was causing long startup times in Servo for Android. We didn’t find a way to fix the library itself, so we opted to get rid of it and implement an alternative way to query Android system fonts. Unfortunately, Android doesn't provide an API to query system fonts until Android O, so we were forced to parse the system configuration files and load fonts manually.

Gear VR support on Rust-WebVR Library

We started working on ovr-mobile-sys, the Rust bindings crate for the Oculus Mobile SDK API. We used rust-bindgen to automatically generate the bindings from the C headers but had to manually transpile some of the inline SDK header code since inline functions don’t generate symbols and are not exported by rust-bindgen.

Then we added the SDK integration into the rust-webvr standalone library. The OculusVRService class offers the entry point to access Oculus SDK and handles life-cycle operations such as initialization, shutdown, and VR device discovery. The integration with the headset is implemented in OculusVRDisplay. Gear VR lacks positional tracking, but by using the neck model provided in the SDK, we expose a basic position vector simulating how the human head naturally rotates relative to the base of the neck.

In order to read Gear VR sensor inputs and submit frames to the headset, the Android activity must enter VR mode by calling the vrapi_EnterVrMode() function. The Oculus Mobile SDK requires precise life-cycle management and handling of events that may interleave in complex ways. For a correct implementation, the Android Activity must enter VR mode in a surfaceChanged() or onResume() event, whichever comes last, and it must leave VR mode in a surfaceDestroyed() or onPause() event, whichever comes first.

In a Glutin-based Android NativeActivity, life-cycle events are delivered using Rust channels. This caused synchronization problems due to non-deterministic event handling across threads. We couldn’t guarantee that the vrapi_LeaveVrMode() function was called before the NativeActivity’s EGLSurface was destroyed and the app went to the background. Additionally, we needed to block the event notifier thread until Gear VR resources were freed in a different renderer thread, to prevent collisions (e.g. Glutin dropping the EGLSurface at the same time that the VR renderer thread was leaving VR mode). We contributed a deterministic event handling implementation to the rust-android-glue.

The Oculus Mobile SDK allows sending a WebGL context texture directly to the headset. Despite that, we opted for the triple-buffered swap chain recommended in the SDK, to avoid potential flickering and performance problems when using the same texture every frame. As we did with the Daydream implementation, we render the VR-ready texture to the current ovrTextureSwapChain using a BlitFramebuffer-based solution, instead of rendering a quad, to avoid implementing the required OpenGL state-change safeguards or context switching.

The Oculus Mobile SDK allowed us to attach the NativeActivity’s surface directly to the Gear VR time warp renderer. We were able to run the pure Rust room-scale demo without writing a line of Java. It’s nice that the SDK allows a Java-free integration, but our luck changed when we integrated all this work into a full browser architecture.

Gear VR integration into Servo

Our Daydream integration worked inside Servo almost on the first try after it landed in the rust-webvr standalone library. This was not the case with the Gear VR integration…

First, we had to research and fix up to four specific GPU driver issues with the Mali-T880 GPU used in the Samsung Galaxy S7 phone:

As a result, we were able to see WebGL stereo rendering on the screen, but entering VR mode crashed with a JNI assertion failure inside the Oculus VR SDK. This happened because, inside the browser context, different threads are used for rendering and for VR device initialization/discovery, which requires a separate Oculus ovrJava instance for each thread.

The assertion failure was gone, but we couldn’t see anything on the screen after calling vrapi_EnterVrMode(). The logcat error messages triggered by the Oculus SDK helped us find the cause of the problem. The Gear VR time warp implementation hijacks the explicitly passed Android window surface pointer. We could use the NativeActivity’s window surface in the standalone room-scale demo; in a full browser architecture, however, there is a fight over ownership of the Android surface between the time warp thread and the browser compositor. We discarded the idea of directly using the NativeActivity’s window surface and decided to switch to a Java SurfaceView VR backend in order to make both the browser’s compositor and Gear VR’s time warp thread happy.

By this means, the VR mode life cycle fit nicely into the browser architecture. There was one final surprise, though. The activity entered VR mode correctly, there were no errors in the logcat, the time warp thread was showing correct render stats, and the headset pose data was correctly fetched. Nevertheless, the VR scene with lens distortion was not yet visible in the Android view hierarchy. This led to another instance of spending some hours of debugging to change a single line of code: the Android SurfaceView was being rendered correctly, but it was composited below the NativeActivity’s browser window because setZOrderOnTop() is not enabled by default on Android.

After this change everything worked flawlessly and it was time to enjoy running some WebVR experiences on the Gear VR ;)

Conclusion

It's been a lot of fun seeing Gear VR support land in Servo and being able to run A-Frame demos in it. We continue to work hard on squeezing WebGL and WebVR performance and expect to land some nice optimizations soon. We are also working on implementing unique WebVR features that no other browser has yet. More news soon ;) Stay tuned!

Air Mozilla: Weekly SUMO Community Meeting August 16, 2017

Weekly SUMO Community Meeting August 16, 2017 This is the sumo weekly call

Firefox Nightly: These Weeks in Firefox: Issue 22

Highlights

  • The main toolbar now has 2 flexible spaces, one on either side of the url/search bar(s). The library button has also replaced the bookmarks menu button in the default toolbar set.

Friends of the Firefox team

  • Resolved bugs (excluding employees): https://mzl.la/2x0m5n4
    • More than one bug fixed:
      • Alejandro Rodriguez Salamanca
      • Dan Banner
      • Hossain Al Ikram [:ikram] (QA Contact)
      • Masatoshi Kimura [:emk]
      • Michael Kohler [:mkohler]
      • Michael Smith [:mismith]
      • Richard Marti (:Paenglab)
      • Rob Wu [:robwu]
      • Tomislav Jovanovic :zombie
      • flyingrub
    • New contributors (🌟 = First Patch!)

Project Updates

Add-ons

Activity Stream

  • Landed pref’ed off in 56 Beta, with localization, snippets, performance telemetry, and Pocket recommendations.
  • Up next
    • Adding “Recent Bookmarks” and “Recently Visited” to Highlights.
    • Adding custom sections via a Web Extension.
    • More customization for Top Sites: Pin/Dismiss, Show More/Less, Add/Edit Top Site.
    • Creating a site summary pipeline (high-res page icons -> Tippytop -> Screenshot + Favicon).
    • Optimizing metadata queries and Tippytop Icon DB improvements.

Firefox Core Engineering

  • Installer
    • Profile cleanup option has landed in the stub installer for 57. Users who are running the stub installer and have an older version of Firefox installed will be presented with the option to clean up their profile.
  • Updater
    • LZMA/SHA384 changes have landed as of 56 beta 3.
  • Quantum & Photon Performance pile-on:
    • Felipe Gomes, Kirk Steuber, Adam Gashlin, Perry Jiang, Doug Thayer, Robert Strong closed 16 bugs and are currently on 11 more bugs.

Form Autofill

Photon

Structure
Animation
Visuals
Preferences

Privacy/Security

Sync / Firefox Accounts

  • We’re wrapping up iOS bidirectional sync work!
  • Form Autofill Address sync is now enabled on Nightly. Enable it in about:preferences#sync

Test Pilot

  • All Test Pilot experiments are off the Add-on SDK now!
  • All Test Pilot add-ons are getting signed through a new signing pipeline (not AMO) to allow for non-WebExtensions in the future.
  • Planning to roll out Screenshots to Release in the next couple of weeks.

Web Payments

Christian Heilmann: Taking a break – and so should you

TL;DR: I am going on holiday for a week and am not taking any computer with me. When I’m back, I will cut down on my travels, social media, and conference participation and focus more on coaching others, writing, and developing with a real production focus.

Larry shows how it is done

You won’t hear much from me in the next week or so, as I am taking a well-deserved vacation. I’m off to take my partner to the Cayman Islands to visit friends who have a house with a spare room, as hotels started to feel like work for me. I’m also making the conscious decision not to take any computer with me, as I would be tempted to do work whilst I am there. Which would be silly.

Having just been in a lot of meetings with other DevRel people, and at a great event about the topic, I found a pattern: we all have no idea how to measure our success and feel oddly unsatisfied, if not worried, about this. And we are all worried about keeping up to date in a market that changes daily.

I’m doing OK on both of these, but I also suffer from the same worries. Furthermore, I am disturbed by the gap between what we talk about at events and workshops and what gets released in the market afterwards.

The huge gap between publication and application

We have all the information on what not to do when creating engaging, fast and reliable solutions. We have all the information on how to even automate some of this so it doesn’t disrupt fast development processes. And yet I feel a massive lack of longevity or maintainability in all the products I see and use. I even see a really disturbing re-emergence of “this only needs to work on browser $x and platform $y” thinking. As if the last decade hadn’t happened. Business decisions dictate what goes into production, less so what we get excited about.

Even more worrying is security. We use a lot of third party code, give it full access to machines and fail to keep it up-to-date. We also happily use new and untested code in production even when the original developers state categorically that it shouldn’t be used in that manner.

When it comes to following the tech news, I see us tumbling in loops. Where in the past there was a monthly cadence of interesting things coming out, more readily available publication channels and a “stream of news” mentality now make it a full-time job just to keep up with what’s happening.

Many thoughtpieces show up in several newsletters and get repurposed even if the original authors admitted in commentary that they were wrong. A lot is about being new and fast, not about being right.

There is also a weird premature productisation happening. When JavaScript, browsers and the web weren’t as ubiquitous as they are now, we showed and explained coding tricks and workarounds in blog posts. Now we find a solution, wrap it in a package or a library, and release it for people to use. This is a natural progression in any software, but I miss the re-use and mulling over of the original thought. And I am also pretty sure that the usage numbers and stars on GitHub are pretty inflated.

My new (old) work modus

Instead of speaking at a large number of conferences, I will be much pickier about where I go. My time is more limited now, and I want to use my talents to have a more direct impact. This is due to a few reasons:

  • I want to be able to measure more directly what I do – it is a good feeling to be told that you were inspiring and great. But it fails to stay a good feeling when you don’t directly see something coming out of it. That’s why instead of going from event to event I will spend more time developing tools and working directly with people who build products.
  • I joined a new team that is much more data-driven – our job is to ensure people can build great apps, helping them by fixing our platform and by helping them apply best practices instead of just hearing about them. This is exciting – I will be able to see just how applicable what we talk about really is and collect data on its impact. Just like any good trainer should ensure that the course attendees really learned what you talked about, this is a full feedback loop for cool technologies like ServiceWorker and Push Notifications.
  • We just hired a truckload of talented people to coach – and I do want to see other people on stage than the usual suspects. It is great to see people grow with help you can give.
  • I just had a growth removed from my face – it was benign, but it is kind of a wake-up call to take better care of myself and have my body looked after better on an ongoing basis
  • I am moving to Berlin to exclusively live there with my partner and our dog – I’ve lived out of suitcases for years now and while this is great it is fun to have a proper home with people you care about to look after. I will very much miss London, but I am done with the politics there and I don’t want to maintain two places any longer.
  • I will spend more time coding – I am taking over some of the work on PWAbuilder and other helper tools and try them out directly with partners. Working in the open is great, but there is a huge difference between what Twitter wants and what people really need
  • I will write more – both articles and blog posts. I will also have a massive stab at refreshing the Developer Evangelism Handbook
  • I will work more with my employer and its partners – there is a huge group of gifted, but very busy developers out there that would love to use more state-of-the-art technology but have no time to try it out or to go to conferences.

Anke, Larry and Chris: Greetings from Berlin

What this means for events and meetups

Simple.

  • I will attend fewer events – instead I will connect conferences and meetups with other people who are not as in demand but are great at what they do. I am also helping and mentoring people inside and outside the company to be invited instead of me. A lot of the time a recommendation is all that is needed. And a helping hand in getting over the fear of “not being good enough”.
  • I will stay shorter – I want to still give keynotes and will consider more workshops. But I won’t be booking conferences back-to-back and will not take part in a lot of the social activities. Unless my partner is also coming along. Even better when the dog is allowed, too.
  • I am offering to help others – to review their work so they get picked, and to help conference organisers pick new, more diverse talent.

I have a lot of friends who do events and I will keep supporting those I know have their full heart in them. I will also try to be supportive for others that need a boost for their new event. But I think it is a good time to help others step up. As my colleague Charles Morris just said at DevRelConf, “not all conferences need a Chris Heilmann”. It is easy to get overly excited about the demand you create. But it is as important to not let it take over your life.

Christian Heilmann: DevRelSummit was well worth it

Last week I was in Seattle to attend a few meetings, and I was lucky to attend DevRelSummit in the Galvanize space. I was invited to cover an “Ask me anything” slot about Developer Outreach at Microsoft and help out Charles Morris of the Edge team, who gave a presentation on a similar matter.

It feels weird to have a conference that is pretty meta about the subject of Developer relations (and there is even a ConfConf for conference organisers), but I can wholeheartedly recommend DevRelSummit for people who already work in this field and those who want to.

The line-up and presentations were full of people who know their job and shared real information from the trenches instead of advertising products to help you. This is a very common worry when a new field in our job market gains traction. Anyone who runs events or outreach programs drowns in daily offers of “the turn-key solution to devrel success” or similar snake oil.

In short, the presentations were:

  • Bear Douglas of Slack (formerly Twitter and Facebook) sharing wins and fails of developer outreach
  • Charles Morris of Microsoft showing how he scaled from 3 people on the Edge team to a whole group, aligning engineering and outreach
  • Kyle Paul showing how to grow a community in spaces that are not technical cool spots and how to measure DevFest success
  • AJ Glasser of Unity explaining how to deal with and harvest feedback you get showing some traps to avoid
  • Damon Hernandez of Samsung talking about building community around hackathons
  • Linda Xie of Sourcegraph showing the product and growth cycle of a new software product
  • Robert Nyman of Google showing how he got into DevRel and what can be done to stay safe and sound on the road
  • Angel Banks and Beth Laing sharing the road to and the way to deliver an inclusive conference with their “We Rise” event as the example
  • Jessica Tremblay and Sam Richard showing how IBM scaled their developer community

In between the presentations there were breakout discussions, lightning talks and general space and time to network and share information.

As expected, the huge topics of the event were increasing diversity, running events smoothly, scaling developer outreach and measuring devrel success. Also, as expected, there were dozens of ways and ideas how to do these things with consensus and agreeable discourse.

All in all, DevRelSummit was a very well executed event and a superb networking opportunity without any commercial overhead. There was a significant lack of grandstanding, and it was exciting to have a clear and open information exchange amongst people who should be in competition but know that when it comes to building communities, this is not helpful. There is a finite number of people we want to reach doing Developer Relations. There is no point in trying to subdivide this group even further.

I want to thank everyone involved for the flawless execution and the willingness to share. Having an invite-only Slack group with pre-set channels for each talk and session was incredibly helpful and means the conversations are still going on right now.

Slack Channel of the event

DevRelSummit showed that when you get a dedicated group of people together who know their jobs and are willing to share, you can have an event that is highly educational without any of the drama that plagues other events. We have a lot of problems to solve, and many of them are very human issues. A common consensus at the event was that we have to deal with humans and relate to them. Numbers and products are good and useful, but not burning out or burning bridges, even with the best of intentions, is even more important.

Air Mozilla: Intern Presentations: Round 4: Tuesday, August 15th

Intern Presentations: Round 4: Tuesday, August 15th Intern Presentations 6 presenters Time: 1:00PM - 2:30PM (PDT) - each presenter will start every 15 minutes 5 MTV, 1 Berlin

Hacks.Mozilla.Org: Essential WebVR resources

The general release of Firefox 55 brought a number of cool new features to the Gecko platform, one of which is the WebVR API v1.1. This allows developers to create immersive VR experiences inside web apps, compatible with popular hardware such as HTC VIVE, Oculus Rift, and Google Daydream. This article looks at the resources we’ve made available to facilitate getting into WebVR development.

Support notes

Version 1.1 of the WebVR API is very new, with varying support available across modern browsers:

  • Firefox 55 sees full support on Windows, and more experimental support available for Mac in the Beta/Nightly release channels only, until testing and final work is completed. Supported VR hardware includes HTC VIVE, Oculus Rift, and Google Daydream.
  • Chrome support is still experimental — you can currently only see support out in the wild on Chrome for Android with Google Daydream.
  • Edge fully supports WebVR 1.1, through the Windows Mixed Reality headset.
  • Support is also available in Samsung Internet, via their GearVR hardware.

Note that the 1.0 version of the API can be considered obsolete, and has been (or will be) removed from all major browsers.

Controlling WebVR apps using the full features of VR controllers relies on the Gamepad Extensions API. This adds features to the Gamepad API that provide access to controller features like haptic actuators (e.g. vibration hardware) and position/orientation data (i.e., pose). This currently has even more limited support than the WebVR API; Firefox 55+ has it available in Beta/Nightly channels.

In other browsers, you’ll have to make do for now with basic Gamepad API functionality, like reporting button presses.

vr.mozilla.org

vr.mozilla.org — Mozilla’s new landing pad for WebVR — features demos, utilities, news and updates, and all the other information you’ll need to get up and running with WebVR.

MDN documentation

MDN has full documentation available for both the APIs mentioned above. See:

In addition, we’ve written some useful guides to get you familiar with the basics of using these APIs:

A-Frame and other libraries

WebVR experiences can be fairly complex to develop. The API itself is easy to use, but you need to use WebGL to create the 3D scenes you want to feature in your apps, and this can prove difficult to those not well-versed in low-level graphics programming. However, there are a number of libraries to hand that can help with this.

The hero of the WebVR world is Mozilla’s A-Frame library, which allows you to create nice looking 3D scenes using custom HTML elements, handling all the WebGL for you behind the scenes. A-Frame apps are also WebVR-compatible by default. It is perfect for putting together apps and experiences quickly.

There are a number of other well-written 3D libraries available too, which abstract away the difficulty of working with raw WebGL. Good examples include:

These don’t include VR capabilities out of the box, but it is not too difficult to write your own WebVR rendering code around them.

If you are worried about supporting older browsers that only include WebVR 1.0 (or no VR) as well as newer browsers with 1.1, you’ll be pleased to know that there is a WebVR polyfill available.

Demos and examples

See also

Mozilla Reps Community: Reps Program Objectives – Q3 2017

As with every quarter, we define Objectives and Key Results for the Reps Program. We are happy to announce the Objectives for the current quarter.

Objective 1: The Reps program continues to grow its process maturity
KR1: 20 Reps have been trained with the Resource training
KR2: 100% of the budget requests of new Reps are filed by Resource Track Reps
KR3: 30 Reps complete the coaching training
KR4: The number of mentor-less Reps is reduced by 50%
KR5: Increase number of authors for Reps tweets to 10 people

Objective 2: The Reps program is the backbone for any mobilizing needs
KR1: We documented what mobilizing Reps are focusing on
KR2: An implementation roadmap for mobilizers’ recommendations is in place.
KR3: Identified 1 key measure that defines how our Mobilizers add value to the coding and non-coding/enthusiast communities

Objective 3: The Activate Portal is improved for Mobilizer Reps and Functional Areas
KR1: The Rust activity is updated
KR2: The WebExtensions activity update has been tested in 3 pilot events in 3 different countries
KR3: 60 unique Reps have run a MozActivate event
KR4: The website is updated to the new branding

We will work closely with the Community Development Team to achieve our goals. You can follow the progress of these tasks in the Reps Issue Tracker. We also have a dashboard to track the status of each objective.

Which of the above objectives are you most interested in? What key result would you like to hear more about? What do you find intriguing? Which thoughts cross your mind upon reading this? Where would you like to help out? Let’s keep the conversation going! Join the discussion on Discourse.

Nick Cameron: These Weeks in Dev-tools #1

2017-08-14

Welcome to the first ever issue of 'These Weeks in Dev-Tools'! The dev-tools team is responsible for developer tools for Rust developers. That means any tools a developer might use (or want to use) when reading, writing, or debugging Rust code, such as Rustdoc, IDEs, editors, Racer, bindgen, Clippy, Rustfmt, etc.

These Weeks in Dev-Tools will keep you up to date with all the exciting news in this area. We plan to have a new issue every few weeks. If you have any news you'd like us to report, please comment on the tracking issue.

If you're interested in Rust's developer tools and want to contribute or ask questions, come chat to us in #rust-dev-tools.

Releases

RFCs

Thanks!

  • @photoszzt has been re-writing various ad-hoc computations into fix-point analyses in Bindgen:
    • whether we can add derive(Debug) to a struct: rust-lang-nursery/rust-bindgen#824
    • and whether a struct has a virtual table: rust-lang-nursery/rust-bindgen#850
  • @topecongiro for doing sustained, impressive work on Rustfmt - implementing the new RFC style, fixing (literally) hundreds of bugs, and lots more.
  • Shout out to @TedDriggs for continuing to push Racer forward. Jwilm and the rest of Racer's users continue to appreciate all your hard work!

Meetings

We've had a bunch of meetings. You can find all the minutes here. Some that might be interesting:

Mozilla Open Policy & Advocacy BlogBringing the 4th Amendment into the Digital Age

Today, Mozilla has joined other major technology companies in filing an amicus brief urging the Supreme Court of the United States to reexamine how the 4th Amendment and search warrant requirements should apply in our digital era. We are joining this brief because we believe our laws need to keep up with what we already know to be true: that the Internet is an integral part of modern life, and that user privacy must not be treated as optional.

At the heart of this case is the government’s attempt to obtain “cell site location information” to aid in a criminal investigation. This information is generated continuously when your phone is on. Your phone communicates with nearby cell sites to connect with the cellular network and those sites create a record of your phone’s location as you go about your business. In the case at hand, the government did not obtain a warrant, which would have required probable cause, before obtaining this location information. Instead, the government sought a court order under the Stored Communications Act of 1986, which requires a lesser showing.

Looking at how the courts have dealt with the cell phone location records in this case demonstrates why our laws must be revisited to account for modern technological reality. The district court decided that the government didn’t have to obtain a warrant because people do not have a reasonable expectation of privacy in their cell phone location information. On appeal, the Sixth Circuit acknowledged that similar information, such as GPS monitoring in government investigations, would require a warrant. But it too found no warrant was needed because the location information was a “business record” from a “third party” (i.e., the service providers).

We believe users should not be forced to surrender their expectations of privacy when using their phones and we hope the Court will reconsider the law in this area.

*Brief link updated on August 16

This Week In Rust: This Week in Rust 195

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

News & Blog Posts

Crate of the Week

This week's crate is exa, a modern ls replacement (with a tree thrown in as well) written in Rust. Thanks to Vikrant for the suggestion.

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

128 pull requests were merged in the last week

New Contributors

  • Alexey Tarasov
  • arshiamufti
  • Foucher
  • Justin Browne
  • Natalie Boehm
  • nicole mazzuca
  • Owen Sanchez
  • Ryan Leckey
  • Tej Chajed
  • Thomas Levy

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

No RFCs were approved this week.

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now. This week's FCPs are:

New RFCs

Style RFCs

Style RFCs are part of the process for deciding on style guidelines for the Rust community and defaults for Rustfmt. The process is similar to the RFC process, but we try to reach rough consensus on issues (including a final comment period) before progressing to PRs. Just like the RFC process, all users are welcome to comment and submit RFCs. If you want to help decide what Rust code should look like, come get involved!

The RFC style is now the default style in Rustfmt - try it out and let us know what you think!

Currently being discussed:

Upcoming Events

If you are running a Rust event please add it to the calendar to get it mentioned here. Email the Rust Community Team for access.

Rust Jobs

No jobs listed for this week.

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

once you can walk barefoot (C), it’s easy to learn to walk with shoes (go) but it will take time to learn to ride a bike (rust)

/u/freakhill on Reddit.

Thanks to Rushmore for the suggestion.

Submit your quotes for next week!

This Week in Rust is edited by: nasa42, llogiq, and brson.

Mozilla Marketing Engineering & Ops BlogMozMEAO SRE Status Report - August 15, 2017

Here’s what happened on the MozMEAO SRE team from August 8th - August 15th.

Current work

MDN Migration to AWS

  • We’ve set up a few cronjobs to periodically sync static files from the current SCL3 datacenter to an S3 bucket. Our Kubernetes development environment runs a cronjob that pulls these files from S3 to a local EFS mount.
    • There was some additional work needed to deal with files in SCL3 that contained Unicode characters in their names.
  • A cronjob in Kubernetes has been implemented to back up new files uploaded to our shared EFS volume.

  • We’ve finished our evaluation of hosted Elasticsearch from elastic.co, which we’ll be using for our initial migration in production.

Upcoming Portland Deis 1 cluster decommissioning

The Deis 1 cluster in Portland is tentatively scheduled to be decommissioned later this week.

Links

Mozilla Addons BlogAdd-ons Update – 2017/08

Here’s the monthly update of the state of the add-ons world.

The Review Queues

In the past month, our team reviewed 1,803 listed add-on submissions:

  • 1,368 in fewer than 5 days (76%).
  • 147 between 5 and 10 days (8%).
  • 288 after more than 10 days (16%).

274 listed add-ons are awaiting review.

If you’re an add-on developer and are looking for contribution opportunities, please consider joining us. Visit our wiki page for more information.

Compatibility Update

We published the blog post for 56 and the bulk validation has been run. This is the last one of these we’ll do, since compatibility is a much smaller problem with the WebExtensions API.

Firefox 57 is now on the Nightly channel, and only accepting WebExtension add-ons by default. Here are some changes we’re implementing on AMO to ease the transition to 57.

We recommend that you test your add-ons on Beta. If you’re an add-ons user, you can install the Add-on Compatibility Reporter. It helps you identify and report any add-ons that aren’t working anymore.

Recognition

We would like to thank the following people for their recent contributions to the add-ons world:

  • Apoorva Pandey
  • Neha Tekriwal
  • Swapnesh Kumar Sahoo
  • rctgamer3
  • Tushar Saini
  • vishal-chitnis
  • Cameron Kaiser
  • zombie
  • Trishul Goel
  • Krzysztof Modras
  • Tushar Saini
  • Tim Nguyen
  • Richard Marti
  • Christophe Villeneuve
  • Jan Henning
  • Leni Mutungi
  • dw-dev
  • Dino Herbert

You can read more about their work in our recognition page.

The post Add-ons Update – 2017/08 appeared first on Mozilla Add-ons Blog.

The Firefox Frontier64-bit Firefox is the new default on 64-bit Windows

Users on 64-bit Windows who download Firefox will now get our 64-bit version by default. That means they’ll install a more secure version of Firefox, one that also crashes a … Read more

The post 64-bit Firefox is the new default on 64-bit Windows appeared first on The Firefox Frontier.

Air MozillaMozilla Weekly Project Meeting, 14 Aug 2017

Mozilla Weekly Project Meeting The Monday Project Meeting

Hacks.Mozilla.OrgA-Frame comes to js13kGames: build a game in WebVR

It’s that time of the year again – the latest edition of the js13kGames competition opened yesterday, on Sunday, August 13th, just like last year and every year going back to 2012 when I started this competition. Every year the contest has a new theme, but this time there’s another twist that’s a little bit different – a brand new A-Frame VR category, just in time for the arrival of WebVR in Firefox 55 and a desktop browser near you.

Js13kGames is an online competition for HTML5 game developers where the fun part is that the size limit is set to 13 kilobytes. Unlike a 48-hour game jam, you have a whole month to come up with your best idea, create it, polish as much as you can, and submit – deadline is September 13th.

A brief history of js13kgames

It started five years ago out of the simple need for a competition for JavaScript game developers like me – I couldn’t find anything interesting, so I created one myself. Somehow it was cool enough for people to participate, and from what I heard they really enjoyed it, so I kept it going over the years even though managing everything on my own is exhausting and time-consuming.

There have been many great games created since the beginning – you can check GitHub’s recent blog post for a quick recap of some of my personal favourites. Two of the best entries from 2016 ended up on Steam in their post-competition versions: Evil Glitch and Glitch Buster, and keys for both of them are available as prizes in the competition this year.

A-Frame category

The big news this year that I’m really proud of: Virtual Reality has arrived with the new A-Frame category. Be sure to check out the A-Frame landing page for the rules and details. You can reference the minified version of the A-Frame library, and you are not required to count its size as part of the 13 kilobyte limit that defines this contest.

I have been really excited about the A-Frame library ever since it was announced, and trying it out only confirmed that. I believe it’s a real game changer (pun intended) for the WebVR world. With just a few lines of HTML markup you can set up a simple scene with VR mode, controls, and lights. Prototyping is extremely easy, and you can build really cool experiments within minutes. There are many useful components in the Registry that can help you out too, so you don’t have to write everything yourself. A-Frame is very powerful, yet so easy to use – I really can’t wait to see what you’ll come up with this year.

Resources

If WebVR is all brand new to you and you have no idea where to start, read Chris Mills’ recent article “WebVR Essentials”. Then be sure to check out the A-Frame website for useful docs and demos, and a lively community of WebVR creators:

I realize the 13K size limit is very constraining, but these limitations spawn creativity. There have been many cool and inspiring games created over the years, and all their source code is available on GitHub in a readable form for everyone to learn from. There are plenty of A-Frame tutorials out there, so feel free to look for the specific solutions to your ideas. I’m sure you’ll find something useful.

Feedback

Many developers who’ve participated in this competition in previous years have mentioned expert feedback as a key benefit of the competition. This year’s judges for the A-Frame category will focus their full attention on WebVR games only, in order to be able to offer constructive feedback on your entry.

The A-Frame judges include: Fernando Serrano Garcia (WebVR and WebGL developer), Diego Marcos (A-Frame co-creator, API designer and maintainer), Ada Rose Edwards (Senior Engineer and WebVR advocate at Samsung) and Matthew ‘Potch’ Claypotch (Developer Advocate at Mozilla).

Prizes

This year, we’ll be offering custom-made VR cardboards to all participants in the js13kGames competition. These will be shipped for every complete submission, along with the traditional annual t-shirt, and a bunch of cool stickers.

In addition to the physical package that’s shipped for free to your doorstep, there’s a whole bunch of digital prizes you can win – software licenses, engines, editors and other tools, as well as subscription plans for various services and online courses, games and game assets, ebooks, and vouchers.

Prizes for the A-Frame category include PlayCanvas licenses, WebVR video courses, and WebStorm licenses. There are other ways to win more prizes too: Community Awards and Social Specials. You can find all the details and rules about how to enter on the competition website.

A look back

I’m happy to see this competition become more and more popular. I’ve started many projects, and many have failed. Yet this one is still alive and kicking, even though HTML5 game development itself is a niche, and the size constraint in this contest means you have to mind the size of every resource you want to use. It is indeed a tough competition and not every developer makes it to the finish, but the feeling of submitting an entry minutes before the deadline is priceless.

I’m a programmer, and my wife Ewa is a graphic designer on all our projects, including js13kGames. I guess that makes Enclave Games a family business! With our little baby daughter Kasia born last year, it’s an ongoing challenge to balance work, family and game development. It’s not easy, but if you believe in something you have to try and make it work.

Start your engines

Anyway, the new category in the competition is a great opportunity to learn A-Frame if you haven’t tried it yet, or improve your skills. After all you have a full month, and there’s guaranteed swag for every entry. The theme this year is “lost” – I hope it will help you find a good idea for the game.

Visit js13kGames website for all the details, see the A-Frame category landing page, and follow @js13kgames on Twitter or on Facebook for announcements. The friendly js13kGames community can help you with any problems or issues you’ll face; they can be found on our js13kgames Slack channel. Good luck and have fun!

Daniel Stenbergkeep finding old security problems

I decided to look closer at security problems and the age of the reported issues in the curl project.

One theory I had when I started to collect this data, was that we actually get security problems reported earlier and earlier over time. That bugs would be around in public release for shorter periods of time nowadays than what they did in the past.

My thinking would go like this: Logically, bugs that have been around for a long time have had a long time to get caught. The more eyes we’ve had on the code, the fewer old bugs should be left and going forward we should more often catch more recently added bugs.

The time from a bug’s introduction into the code until the day we get a security report about it, should logically decrease over time.

What if it doesn’t?

First, let’s take a look at the data at hand. In the curl project we have so far reported in total 68 security problems over the project’s life time. The first 4 were not recorded correctly so I’ll discard them from my data here, leaving 64 issues to check out.

The graph below shows the time distribution. The all time leader so far is the issue reported to us on March 10 this year (2017), which was present in the code since the version 6.5 release done on March 13 2000. 6,206 days, just three days away from 17 whole years.

There are no fewer than twelve additional issues that lingered for more than 5,000 days until reported. Only 20 (31%) of the reported issues had been public for less than 1,000 days. The fastest report came in on the release day itself: 0 days.

The median time from release to report is a whopping 2,541 days.

When we receive a report about a security problem, we want the issue fixed, responsibly announced to the world, and a new release shipped where the problem is gone. The median time to go through this procedure is 26.5 days, and the distribution looks like this:

What stands out here is the TLS session resumption bypass, which happened because we struggled with understanding it and how to address it properly. Otherwise the numbers look all reasonable to me as we typically do releases at least once every 8 weeks. We rarely ship a release with a known security issue outstanding.

Why are very old issues still found?

I think it is partly because the tools that help people find problems are gradually improving, so they now catch things that simply weren’t found very often before. With new tools we can find problems that have been around for a long time.

Every year, the oldest parts of the code get one year older. So the older the project gets, the older the bugs that can be found, while in the early days only a small share of the code (if any at all) was really old.

What if we instead count age as a percentage of the project’s life time? Using this formula, a bug found at day 100 that was added at day 50 would be 50% but if it was added at day 80 it would be 20%. Maybe this would show a graph where the bars are shrinking over time?
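
To make that percentage concrete, here is a minimal sketch of the calculation (in JavaScript for illustration; this is not the script used to produce the real numbers):

// Express a bug's age at report time as a share of the project's life time.
// Days are counted from the project's birth day (day 0).
function relativeAge(introducedDay, reportedDay) {
  return (reportedDay - introducedDay) / reportedDay;
}

console.log(relativeAge(50, 100)); // 0.5 -> present for 50% of the life time
console.log(relativeAge(80, 100)); // 0.2 -> present for 20% of the life time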

But no. In fact it shows 17 (27%) of them having been present during 80% or more of the project’s life time! The median issue had been in there during 49% of the project’s life time!

It does however make another issue the worst offender, as one of the issues had been around during 91% of the project’s life time.

This counts March 20 1998 as the birth day. Of course we got no reports in the first few years since we basically had no users then!

Specific or generic?

Is this pattern something that is specific to the curl project or can we find it in other projects too? I don’t know. I have not seen this kind of data presented by others, and I don’t have the same insight into the details of other projects with a large enough number of issues to be interesting.

What can we do to make the bars shrink?

Well, if there are old bugs left to find they won’t shrink, because for every such old security issue that’s still left there will be a tall bar. Hopefully though, by doing more tests, using more tools regularly (fuzzers, analyzers etc) and with more eyeballs on the code, we should iron out our security issues over time. Logically that should lead to a project where newly added security problems are detected sooner rather than later. We just don’t seem to be at that point yet…

Caveat

One fact that skews the numbers is that we are much more likely to record issues as security related these days. A decade ago when we got a report about a segfault or something we would often just consider it bad code and fix it, and neither us maintainers nor the reporter would think much about the potential security impact.

These days we’re at the other end of the spectrum, where people are much quicker to jump to a security-issue suspicion or conclusion. Today people report bugs as security issues to a much higher degree than they did in the past. This is basically a good thing, even if it makes it harder to draw conclusions over time.

Data sources

When you want to repeat the above graphs and verify my numbers:

  • vuln.pm – from the curl web site repository holds security issue meta data
  • releaselog – on the curl web site offers release meta data, even as a CSV download on the bottom of the page
  • report2release.pl – the perl script I used to calculate the report until release periods.

Justin DolskePhoton Engineering Newsletter #12

Let’s get straight into update #12!

Oh, hey, anyone notice any icon changes recently? Yeah, they’re pretty wonderful. Or maybe I should say funderful? Looking forward to where they end up!


Speaking of looking forward, I’m going to be on vacation for the next two weeks. But fear not! Jared and Mike will be covering Photon updates, so you’ll still be able to get your Photon phix.

Recent Changes

Menus/structure:

Animation:

Preferences:

Visual redesign:

  • Updated the button positions in the navbar, and made them more customizable. (This was a contributor patch – thanks!)
  • Close buttons updated across the UI (also a contributor patch!)
  • The “Compact Light” and “Compact Dark” themes have been renamed to simply “Light” and “Dark”. (The UI density setting is already independent of the theme.)

Onboarding:

Performance:

 


Code SimplicityKindness and Code

It is very easy to think of software development as being an entirely technical activity, where humans don’t really matter and everything is about the computer. However, the opposite is actually true.

Software engineering is fundamentally a human discipline.

Many of the mistakes made over the years in trying to fix software development have been made by focusing purely on the technical aspects of the system without thinking about the fact that it is human beings who write the code. When you see somebody who cares about optimization more than readability of code, when you see somebody who won’t write a comment but will spend all day tweaking their shell scripts to be fewer lines, when you have somebody who can’t communicate but worships small binaries, you’re seeing various symptoms of this problem.

In reality, software systems are written by people. They are read by people, modified by people, understood or not by people. They represent the mind of the developers that wrote them. They are the closest thing to a raw representation of thought that we have on Earth. They are not themselves human, alive, intelligent, emotional, evil, or good. It’s people that have those qualities. Software is used entirely and only to serve people. They are the product of people, and they are usually the product of a group of those people who had to work together, communicate, understand each other, and collaborate effectively. As such, there’s an important point to be made about working with a group of software engineers:

There is no value to being cruel to other people in the development community.

It doesn’t help to be rude to the people that you work with. It doesn’t help to angrily tell them that they are wrong and that they shouldn’t be doing what they are doing. It does help to make sure that the laws of software design are applied, and that people follow a good path in terms of making systems that can be easily read, understood, and maintained. It doesn’t require that you be cruel to do this, though. Sometimes you do have to tell people that they haven’t done the right thing. But you can just be matter of fact about it—you don’t have to get up in their face or attack them personally for it.

For example, let’s say somebody has written a bad piece of code. You have two ways you could comment on this:

“I can’t believe you think this is a good idea. Have you ever read a book on software design? Obviously you don’t do this.”

That’s the rude way—it’s an attack on the person themselves. Another way you could tell them what’s wrong is this:

“This line of code is hard to understand, and this looks like code duplication. Can you refactor this so that it’s clearer?”

In some ways, the key point here is that you’re commenting on the code, and not on the developer. But also, the key point is that you’re not being a jerk. I mean, come on. The first response is obviously rude. Does it make the person want to work with you, want to contribute more code, or want to get better? No. The second response, on the other hand, lets the person know that they’re taking a bad path and that you’re not going to let that bad code into the codebase.

The whole reason that you’re preventing that programmer from submitting bad code has to do with people in the first place. Either it’s about your users or it’s about the other developers who will have to read the system. Usually, it’s about both, since making a more maintainable system is done entirely so that you can keep on helping users effectively. But one way or another, your work as a software engineer has to do with people.

Yes, a lot of people are going to read the code and use the program, and the person whose code you’re reviewing is just one person. So it’s possible to think that you can sacrifice some kindness in the name of making this system good for everybody. Maybe you’re right. But why be rude or cruel when you don’t have to be? Why create that environment on your team that makes people scared of doing the wrong thing, instead of making them happy for doing the right thing?

This extends beyond just code reviews, too. Other software engineers have things to say. You should listen to them, whether you agree or not. Acknowledge their statements politely. Communicate your ideas to them in some constructive fashion.

And look, sometimes people get angry. Be understanding. Sometimes you’re going to get angry too, and you’d probably like your teammates to be understanding when you do.

This might all sound kind of airy-fairy, like some sort of unimportant psychobabble BS. But look. I’m not saying, “Everybody is always right! You should agree with everybody all the time! Don’t ever tell anybody that they are wrong! Nobody ever does anything bad!” No, people are frequently wrong and there are many bad things in the world and in software engineering that you have to say no to. The world is not a good place, always. It’s full of stupid people. Some of those stupid people are your co-workers. But even so, you’re not going to be doing anything effective by being rude to those stupid people. They don’t need your hatred—they need your compassion and your assistance. And most of your co-workers are probably not stupid people. They are probably intelligent, well-meaning individuals who sometimes make mistakes, just like you do. Give them the benefit of the doubt. Work with them, be kind, and make better software as a result.

-Max

Cameron KaiserTime to sink the Admiral (or, why using the DMCA to block adblockers is a bad move)

One of the testing steps I have to do, but don't enjoy, is running TenFourFox "naked" (without my typical adblock add-ons) to get an assessment of how it functions drinking from the toxic firehose that is the typical modern ad network. (TL;DR: Power Macs run modern Web ads pretty poorly. But, as long as it doesn't crash.) Now to be sure, as far as I'm concerned sites get to monetize their pages however they choose. Heck, there are ads on this blog, provided through Google AdSense, so that I can continue to not run a tip jar. The implicit social contract is that they can stick their content behind a paywall or run ads beside it, and it's up to me/you to decide whether we're going to put up with that and read the content. If we read it, we should pony up in either eyeballs or dinero.

This, of course, assumes that the ads we get served are reasonable and in a reasonable quantity. However, it's pretty hard to make money simply off per-click ads and networks with low CPM, so many sites run a quantity widely referred to as a "metric a$$ton" and the ads they run are not particularly selective. If those ads end up being fat or heavy or run scripts and drag the browser down, they consider that the cost of doing business. If, more sinisterly, they end up spying on or fingerprinting you, or worse, try to host malware and other malicious content, well, it's not their problem because it's not their ad (but don't block them all the same).

What the solution to this problem is not, is begging us to whitelist them because they're a good site. If you're not terribly discriminating about what ads you burden your viewers with, then how good can your site really be? The other non-solution is to offer effectively the Hobson's choice of "ads or paywall." What, the solution to the ads you don't curate is to give you my credit card number so you can be equally as careful with that?

So until this situation changes and sites get a little smarter about how they do sponsorship (let me call out a positive example: The Onion's sponsored content [slightly NSFW related article]), I don't have a moral problem with adblocking because really that's the only way to equalize the power dynamic. Block the ads on this blog if you want; I don't care. Click on them or not, your choice. In fact, for the Power Macs TenFourFox targets, I find an adblocker just about essential and my hat is off to those saints of the church who don't run one. Lots of current sites are molasses in January on barbiturates without it and I can only improve this problem to a certain degree. Heck, they drag on my i7 MacBook Air. What chance does my iMac G4 have?

That's why this egregious abuse of statute is particularly pernicious: a company called Admiral, which operates an anti-adblocker, managed to use a DMCA request to GitHub to get the address of the site hosting their beacon image (used to determine if you're blocking them or not) removed from the EasyList adblock listing. They've admitted it, too.

The legal theory, as I understand it (don't ask me to defend it), is that adblockers allow users to circumvent measures designed to "control access," which is a specific component of the American DMCA. (It is not, in fact, the case in Europe.) It might be more accurate to say that the components of adblockers that block adblocker blocking are primarily what they object to. (Uh, yo dawg.) Since the volunteer maintainers of EasyList are the weak link and the list they maintain is the one most adblockers use as a base, this single action gets them unblocked by most adblock extensions and potentially gives other ad networks a fairly big club to force compliance to boot.

The problem with this view, and it is certainly not universally shared, is that given that adblockers work by preventing certain components of the page from loading, theoretically anything that does not load the website completely as designed is therefore in violation. The famous text browser Lynx, for example, does not display images or run JavaScript, and since most ads and adblocker-blockers are implemented with images and JavaScript, it is now revealed as a sinister tool of the godless communist horde. NoScript blocks JavaScript on sites you select, and for the same reasons will cause the end of the American Republic. Intentionally unplugging your network cable at the exact moment when the site is pushing you a minified blob of JS crap -- or the more technically adept action of blackholing that address in your hosts file or on your router -- prevents the site from loading code to function in the obnoxious manner the ad network wants it to, and as a result is clearly treason. Notice that in all these examples the actual code of the site is not modified, just whether the client will process (or in the last example even just receive) and display it. Are all these examples "circumvention"?

This situation cannot stand and it's time for us independent browser maintainers to fight fire with fire. If Admiral isn't willing to back down, I'll issue the ultimatum that I will write code into TenFourFox to treat any of Admiral's web properties as malicious, and I encourage other browser maintainers to do the same. We already use Safe Browsing to block sites that try to load malicious code and we already generate warnings for sites with iffy credentials or bad certificates, so it's not a stretch to say that a site that actively attacks user choice is similarly harmful. The block will only be by default and a user that really wants to can turn it off, but the point will be made. I challenge Admiral to step up their game and start picking on people their own size if they really believe this is the best course of action.

And hey, even if this doesn't work, I should get lots of ad clicks from this, right? Right?






I'll get my coat.

QMOFirefox 56 Beta 4 Testday, August 18th

Hello dear Mozillians!

We are happy to let you know that Friday, August 18th, we are organizing Firefox 56 Beta 4 Testday. We’ll be focusing our testing on the following new features: Media Block Autoplay, Preferences Search [Photon] and Photon Preferences reorg V2.

Check out the detailed instructions via this etherpad.

No previous testing experience is required, so feel free to join us on #qa IRC channel where our moderators will offer you guidance and answer your questions.

Join us and help us make Firefox better!

See you on Friday!

The Mozilla BlogHonoring Our Friend Bassel: Announcing the Bassel Khartabil Free Culture Fellowship

To honor Bassel Khartabil’s legacy and his lasting impact on the open web, a slate of nonprofits are launching a new fellowship in his name

 

By Katherine Maher (executive director, Wikimedia Foundation), Ryan Merkley (CEO, Creative Commons) and Mark Surman (executive director, Mozilla)

On August 1, 2017, we received the heartbreaking news that our friend Bassel (Safadi) Khartabil, detained since 2012, was executed by the Syrian government shortly after his 2015 disappearance. Khartabil was a Palestinian Syrian open internet activist, a free culture hero, and an important member of our community. Our thoughts are with Bassel’s family, now and always.

Today we’re announcing the Bassel Khartabil Free Culture Fellowship to honor his legacy and lasting impact on the open web.

Bassel Khartabil

Bassel was a relentless advocate for free speech, free culture, and democracy. He was the cofounder of Syria’s first hackerspace, Aiki Lab, Creative Commons’ Syrian project lead, and a prolific open source contributor, from Firefox to Wikipedia. Bassel’s final project, relaunched as #NEWPALMYRA, entailed building free and open 3D models of the ancient Syrian city of Palmyra. In his work as a computer engineer, educator, artist, musician, cultural heritage researcher, and thought leader, Bassel modeled a more open world, impacting lives globally.

To honor that legacy, the Bassel Khartabil Free Culture Fellowship will support outstanding individuals developing the culture of their communities under adverse circumstances. The Fellowship — organized by Creative Commons, Mozilla, the Wikimedia Foundation, the Jimmy Wales Foundation, #NEWPALMYRA, and others — will launch with a three-year commitment to promote values like open culture, radical sharing, free knowledge, remix, collaboration, courage, optimism, and humanity.

As part of this new initiative, fellows can work in a range of mediums, from art and music to software and community building. All projects will catalyze free culture, particularly in societies vulnerable to attacks on freedom of expression and free access to knowledge. Special consideration will be given to applicants operating within closed societies and in developing economies where other forms of support are scarce. Applications from the Levant and wider MENA region are greatly encouraged.

Throughout their fellowship term, chosen fellows will receive a stipend, mentorship from affiliate organizations, skill development, project promotion, and fundraising support from the partner network. Fellows will be chosen by a selection committee composed of representatives of the partner organizations.

Says Mitchell Baker, Mozilla executive chairwoman: “Bassel introduced me to Damascus communities who were hungry to learn, collaborate and share. He introduced me to the Creative Commons community which he helped found. He introduced me to the open source hacker space he founded, where Linux and Mozilla and JavaScript libraries were debated, and the ideas of open collaboration blossomed. Bassel taught us all. The cost was execution. As a colleague, Bassel is gone. As a leader and as a source of inspiration, Bassel remains strong. I am honored to join with others and echo Bassel’s spirit through this Fellowship.”

Fellowship details

Organizational Partners include Creative Commons, #FREEBASSEL, Wikimedia Foundation, GlobalVoices, Mozilla, #NEWPALMYRA, YallaStartup, the Jimmy Wales Foundation, and SMEX.

Amazon Web Services is a supporting partner.

The Fellowships are based on one-year terms, which are eligible for renewal.

The benefits are designed to allow for flexibility and stability both for Fellows and their families. The standard fellowship offers a stipend of $50,000 USD, paid in 10 monthly installments. Fellows are responsible for remitting all applicable taxes as required.

To help offset cost of living, the fellowship also provides supplements for childcare and health insurance, and may provide support for project funding on a case-by-case basis. The fellowship also covers the cost of required travel for fellowship activities.

Fellows will receive:

  • A stipend of $50,000 USD, paid in 10 monthly installments
  • A one-time health insurance supplement for Fellows and their families, ranging from $3,500 for single Fellows to $7,000 for a couple with two or more children
  • A one-time childcare allotment of up to $6,000 for families with children
  • An allowance of up to $3,000 towards the purchase of a laptop computer, digital cameras, recorders and computer software; fees for continuing studies or other courses, research fees or payments, to the extent such purchases and fees are related to the fellowship
  • Coverage in full for all approved fellowship trips, both domestic and international

The first fellowship will be awarded in April 2018. Applications will be accepted beginning February 2018.

Eligibility Requirements. The Bassel Khartabil Free Culture Fellowship is open to individuals and small teams worldwide, who:

  • Propose a viable new initiative to advance free culture values as outlined in the call for applicants
  • Demonstrate a history of activism in the Open Source, Open Access, Free Culture or Sharing communities
  • Are prepared to focus on the fellowship as their primary work

Special consideration will be given to applicants operating under oppressive conditions, within closed societies, in developing economies where other forms of support are scarce, and in the Levant and wider MENA regions.

Eligible Projects. Proposed projects should advance the free culture values of Bassel Khartabil through the use of art, technology, and culture. Successful projects will aim to:

  • Meaningfully increase free public access to human knowledge, art or culture
  • Further the cause of social justice/social change
  • Strive to develop both a local and global community to support its cause

Any code, content or other materials produced must be published and released as free, openly licensed and/or open-source.

Application Process. Project proposals are expected to include the following:

  • Vision statement
  • Bio and CV
  • Budget and resource requirements for the next year of project development

Applicants whose projects are chosen to advance to the next stage in the evaluation process may be asked to provide additional information, including personal references and documentation verifying income.

About Bassel

Bassel Khartabil, a Palestinian-Syrian computer engineer, educator, artist, musician, cultural heritage researcher and thought leader, was a central figure in the global free culture movement, connecting and promoting Syria’s emerging tech community as it existed before the country was ransacked by civil war. Bassel co-founded Syria’s first hackerspace, Aiki Lab, in Damascus in 2010. He was the Syrian lead for Creative Commons as well as a contributor to Mozilla’s Firefox browser and the Red Hat Fedora Linux operating system. His research into preserving Syrian archeology with computer 3D modeling was a seminal precursor to current practices in digital cultural heritage preservation — this work was relaunched as the #NEWPALMYRA project in 2015.

Bassel’s influence went beyond Syria. He was a key attendee at the Middle East’s bloggers conferences and played a vital role in the negotiations in Doha in 2010 that led to a common language for discussing fair use and copyright across the Arab-speaking world. Software platforms he developed, such as the open-source Aiki Framework for collaborative web development, still power high-traffic web sites today, including Open Clip Art and the Open Font Library. His passion and efforts inspired a new community of coders and artists to take up his cause and further his legacy, and resulted in the offer of a research position in MIT Media Lab’s Center for Civic Media; his listing in Foreign Policy’s 2012 list of Top Global Thinkers; and the award of Index on Censorship’s 2013 Digital Freedom Award.

Bassel was taken from the streets in March of 2012 in a military arrest and interrogated and tortured in secret in a facility controlled by Syria’s General Intelligence Directorate. After a worldwide campaign by international human rights groups, together with Bassel’s many colleagues in the open internet and free culture communities, he was moved to Adra’s civilian prison, where he was able to communicate with his family and friends. His detention was ruled unlawful by the United Nations Working Group on Arbitrary Detention, and condemned by international organizations such as Creative Commons, Amnesty International, Human Rights Watch, the Electronic Frontier Foundation, and the Jimmy Wales Foundation.

Despite the international outrage at his treatment and calls for his release, in October of 2015 he was moved to an undisclosed location and executed shortly thereafter — a fact that was kept secret by the Syrian regime for nearly two years.

The post Honoring Our Friend Bassel: Announcing the Bassel Khartabil Free Culture Fellowship appeared first on The Mozilla Blog.

Ehsan AkhgariQuantum Flow Engineering Newsletter #19

As usual, I have some quick updates to share about what we’ve been up to on improving the performance of the browser in the past week or so.  Let’s first look at our progress on the Speedometer benchmark.  Our performance goal for Firefox 57 was to get within 20% of Chrome’s benchmark score on our Acer reference hardware on Win64.  Those of you who watch the Firefox Health Dashboards every once in a while may have noticed that now we are well within that target:

Speedometer Progress Chart from the Firefox Health Dashboard, within 14.86% of Chrome's benchmark score

It’s nice to see the smiley face on this chart, finally!  You can see the more detailed downward slope on the AWFY graph that shows the progress in the past couple of weeks or so (dark red dots are PGO builds, orange dots are non-PGO builds, and of course green in Chrome):

Detailed Speedometer progress in the past couple of weeks on Win64 (Acer reference hardware)

The situation on Win32 is a bit worse, due to Chrome’s recent switch to clang-cl on Windows instead of MSVC, which gave them around a 30% speed boost on the 32-bit Speedometer score, but we have made progress nonetheless. Such is the nature of tracking moving targets!

Speedometer progress chart on Win32

The other performance aspect to have a look at again is our progress at eliminating slow synchronous IPC calls. I last wrote about this about three weeks ago, and since then at least one major change happened: the infamous document.cookie synchronous IPC call was eliminated, so I figured it may be a good time to look at the data again.

Sync IPC Analysis for 2017-08-10

Telemetry data is laggy since it includes data from older versions of Nightly, but if you compare this to the previous chart, there should be a stark difference visible: PCookieService::Msg_GetCookieString is now a much smaller part of the overall data (at around 26.1%). Looking at the list of the top ten messages, the next ones in order are the usual suspects for those who have followed these newsletters for a while: some JS-initiated IPC, PAPZCTreeManager::Msg_ReceiveMouseInputEvent, followed by more JS IPC, followed by PBrowser::Msg_NotifyIMEFocus, followed by even more JS IPC, followed by two new messages that are now surfacing as we’ve fixed the worst ones: PDocAccessible::Msg_SyncTextChangeEvent, which is related to accessibility and which the data shows affects a relatively small number of sessions due to its low submission rate, and PContent::Msg_ClassifyLocal, which probably comes from making the Flash plugin click-to-play by default.

Now let’s look at the breakdown of synchronous IPC messages initiated from JS:

JS Sync IPC Analysis for 2017-08-10

The story here remains unchanged: most of the sync IPC messages we’re seeing come from legacy extensions, and there is also the contextmenu sync IPC, which has a patch pending review.  However, the picture here may start changing quite soon.  You may have seen the recent announcement about legacy extensions being disabled on Nightly starting from tomorrow, so hopefully this data (and the C++ sync IPC data) will soon start to shift to reflect more of the performance characteristics that our users on the release channel will experience for Firefox 57.

Now please let me acknowledge the great work of those who made Firefox faster last week. I hope I’m not forgetting any names!

Mozilla Addons BlogWebExtensions in Firefox 56

Firefox 56 landed in Beta this week, so it’s time for another update on the WebExtensions transition. Because the development period for this latest release was about twice as long as normal, we have many more updates. Documentation for the APIs discussed here can be found on MDN Web Docs.

API changes

The browsingData API can now remove cookies by host. The initial implementation of browsingData has landed for Android with support for the settings and removeCookies APIs.
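
For illustration, a minimal sketch of removing cookies for a single host. This is hedged: the hostnames removal option and the example host are assumptions, and the extension needs the "browsingData" permission.

// Hypothetical example: clear cookies only for one host.
browser.browsingData.removeCookies({
  hostnames: ["tracker.example.com"],
});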

The contextMenus API also has a few improvements. The text of the link is now included in the onClickData event and text selection is no longer limited to 150 characters. Optional permission requests can now also be triggered from context menus.

An alternative, more general namespace was added, called browser.menus. It supports the same API and all existing menu contexts, plus a new one that allows you to add items to the Tools menu. You can also provide different icons for your menu items. For example:

browser.menus.create({
  id: "sort-tabs",
  title: "A-Z",
  contexts: ["tools_menu"],
  icons: {
    16: "icon-16-context-menu.png",
  },
});



The windows API can now read the title of a window and preface it, by passing titlePreface when creating or updating the window. This allows extensions to label different windows so they’re easier to distinguish.
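
For example, a minimal sketch (the preface string is just an illustration):

// Prefix the current window's title so it is easy to tell apart.
browser.windows.getCurrent().then((win) => {
  browser.windows.update(win.id, { titlePreface: "[Work] " });
});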

The downloads.open API now requires user interaction to be called. This mirrors the Chrome API which also requires user interaction. You can now download a blob created in a background page.
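
Here is a hedged sketch of downloading a blob from a background page; the file name and contents are made up, and the extension needs the "downloads" permission:

// Build a blob in the background page and hand it to the downloads API.
const blob = new Blob(["hello from the background page\n"], { type: "text/plain" });
browser.downloads.download({
  url: URL.createObjectURL(blob),
  filename: "example.txt",
  saveAs: true, // prompts the user; downloads.open() similarly needs a user gesture
});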

The tabs API has new printing APIs. The tabs.print, tabs.printPreview and tabs.saveAsPDF (not on Mac OS X) methods will bring up the respective print dialogs for the page. The tabs.Tab object now includes the time the tab was lastAccessed.
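
A small sketch of the new tab APIs, assuming an extension page is calling them (and remembering that saveAsPDF is unavailable on Mac OS X, as noted above):

// Log when the active tab was last accessed, then save the page as a PDF.
browser.tabs.query({ active: true, currentWindow: true }).then(([tab]) => {
  console.log("Last accessed:", new Date(tab.lastAccessed));
  return browser.tabs.saveAsPDF({}); // an empty pageSettings object uses the defaults
});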

The webRequest API can now monitor web socket connections (but not the messages) by specifying ws:// or wss:// in the match pattern. Similarly, match patterns now support moz-extension URLs, however this only applies to the same extension. Importantly, an HTTP 302 redirection to a moz-extension page will now work. For example, this was a common use case for extensions that integrated with OAuth.
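
A hedged sketch of observing web socket handshakes; it assumes the "webRequest" permission plus matching host permissions in the manifest:

// Log WebSocket connection attempts; the messages themselves are not visible.
browser.webRequest.onBeforeRequest.addListener(
  (details) => {
    console.log("WebSocket connection to", details.url);
  },
  { urls: ["wss://*/*", "ws://*/*"] }
);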

The pageAction API can now be shown on a per-tab basis on Android.

The privacy API gained two new APIs. The privacy.services.passwordSavingEnabled API allows an extension to toggle the preferences that control password saving. The privacy.websites.referrersEnabled API allows an extension to toggle the preferences that control the sending of HTTP Referrer headers.

A new browserSettings API has been added, starting with a setting to disable the browser’s cache. We’ll use this API for similar settings in the future.
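
A minimal sketch of toggling the settings described in the last two paragraphs; it assumes the "privacy" and "browserSettings" permissions, and the chosen values are arbitrary:

// Turn off password saving prompts and the cache, and read back whether
// referrers are currently being sent.
browser.privacy.services.passwordSavingEnabled.set({ value: false });
browser.browserSettings.cacheEnabled.set({ value: false });
browser.privacy.websites.referrersEnabled.get({}).then((setting) => {
  console.log("Referrers enabled:", setting.value);
});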

In WebExtensions, we manage the changing of preferences and effects when extensions get uninstalled. This management was applied to chrome_url_overrides. The same management now prevents extensions overriding user changed preferences.

The theming API gained a reset method which can be called after an update to reset Firefox to the default theme.

The proxy API now has the ability to clear out a previously registered proxy.

If you’d like to store a large amount of data in indexedDB (something we recommend over storage.local), you can do so by requesting the unlimitedStorage permission. Requesting this will stop indexedDB from prompting the user for permission to store a large amount of data.
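
As a sketch, opening a database from an extension page looks like ordinary IndexedDB; the only extension-specific part is the assumed "unlimitedStorage" entry in the manifest's permissions, and the database name here is made up:

// Ordinary IndexedDB usage; with "unlimitedStorage" in the manifest,
// Firefox will not prompt the user as the stored data grows.
const request = indexedDB.open("my-extension-db", 1);
request.onupgradeneeded = () => {
  request.result.createObjectStore("blobs", { keyPath: "id" });
};
request.onsuccess = () => {
  console.log("Database ready:", request.result.name);
};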

The management API has added get and getAll commands. This allows extensions to query existing add-ons to spot any potential conflicts with other content.
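
For example, a hedged sketch of checking for a potentially conflicting add-on; the name being checked is invented, and the "management" permission is required:

// List installed add-ons and look for one we know we conflict with.
browser.management.getAll().then((extensions) => {
  const conflict = extensions.find((ext) => ext.name === "Some Other Ad Blocker");
  if (conflict) {
    console.log("Possible conflict with", conflict.id);
  }
});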

Finally, the devtools.panels.elements.onSelectionChanged API landed and extensions that use the developer tools will find that their panels open faster.

Out of process extensions

We first mentioned out of process extensions back in the WebExtensions in Firefox 52 blog post. The project started back in 2016, and it has now been turned on for Windows users in Firefox 56. This is a huge milestone and a lot of work from the team.

This means that all WebExtensions will run in their own process (that’s one process for all extensions). This has many advantages, but chief among them are performance, security, and crash handling. For example, a crash in a WebExtension will no longer bring down Firefox. Content scripts from WebExtensions are still handled by the content process.

With the new WebExtensions architecture this change was completed with zero changes by extension developers, a significant improvement over the legacy extension environment.

There are some remaining bugs on Linux and OS X that prevent us from enabling it there, but we hope to enable those in the coming releases.

Along with measuring the performance of out of process, we’ve added in multiple telemetry signals to measure the performance of WebExtensions.  For example, it was recently found that storage.local.set was slow. With some improvements, we’ve seen a significant performance boost from a median of over 200ms down to around 25ms:

These telemetry measures conform to the standard Mozilla telemetry guidelines.

about:debugging

The about:debugging page got some more improvements:

The add-on ID has been added to the page. If there’s a warning about processing the add-on, that will now be shown next to the extension. Perhaps most useful to those working on their first add-on, if an add-on fails to load because of a problem, then no problem—there’s now an easy “retry” button for you to press:

Contributors

Thank you once again to our many contributors for this release, especially our volunteers including: Cameron Kaiser, dw-dev, Giorgio Maone, Swapnesh Kumar Sahoo, Timothy Johnson, Tushar Saini and Tomislav Jovanovic.

Update: improved the quality of the image for context menus.

The post WebExtensions in Firefox 56 appeared first on Mozilla Add-ons Blog.

Firefox Test PilotMy Summer Internship with Firefox Test Pilot

As part of my internship, I participated in a range of activities with the Test Pilot team, including in-home interviews with people who use Test Pilot experiments.

This past summer I had the opportunity to work as an intern on the Firefox Test Pilot team. Upon joining the team I was informed of my summer project: a web experiment to allow for private and secure file transfers.

Firefox Test Pilot experiments tend to act as supplemental features available for the browser, but this project was different. It was necessary to move this experiment to the web because we felt it would be too restrictive to force both file senders and file recipients to use Firefox. After much reworking and refactoring, the project was finally released as Send.

Defining Standards

One of the most important things we needed to do before we started writing code was to define exactly what we meant by “private” and “secure file transfer”. Different people can have greatly varied perceptions of what constitutes a satisfactory level of privacy, so it was essential to define our standards before starting the project so as to not compromise our users’ privacy.

Our goal was to make a product that would allow users to share files anonymously and without fear that a third party could snoop in on the transfer. At first we considered WebRTC to allow for peer-to-peer connections, but decided against it in the end as it wasn’t entirely reliable for larger file sizes. It also would have been a hassle for users as it would require both the sender and recipient to keep the browser tab open for the entire duration of transfer.

Without peer-to-peer connections, we decided to host files on Amazon’s S3. We also decided to encrypt the file using client-side cryptography libraries to prevent Mozilla or any third party from ever seeing the contents of the file. We settled on appending secret 128-bit AES-GCM keys as a hash field on a generated URL that a sender could then share with a recipient. We would then pipe the upload of the encrypted file through our servers to an S3 bucket.

The use of the hash parameter in the URL means the key is never sent to the server and ensures that the file stays encrypted until the recipient’s browser downloads the entire file. We believe that the use of client-side encryption and decryption greatly mitigates any possible information leakage while sharing files, and as a result, would be satisfactory for almost all use cases.
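
To make the approach concrete, here is a minimal sketch of the idea using the Web Crypto API. This is not the actual Send code; the host name, upload id, and key encoding are illustrative assumptions.

// Generate a 128-bit AES-GCM key, encrypt a file client-side, and put the key
// in the URL fragment so it is never sent to the server.
async function encryptForSharing(fileBuffer, uploadId) {
  const key = await crypto.subtle.generateKey(
    { name: "AES-GCM", length: 128 },
    true, // extractable, so it can be exported into the share link
    ["encrypt", "decrypt"]
  );
  const iv = crypto.getRandomValues(new Uint8Array(12)); // 96-bit nonce
  const ciphertext = await crypto.subtle.encrypt({ name: "AES-GCM", iv }, key, fileBuffer);
  // The part after '#' stays in the browser: servers never see the fragment.
  const rawKey = await crypto.subtle.exportKey("raw", key);
  const keyB64 = btoa(String.fromCharCode(...new Uint8Array(rawKey)));
  const shareUrl = `https://send.example.com/download/${uploadId}/#${keyB64}`;
  return { ciphertext, iv, shareUrl };
}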

We also decided on adding in auto-expiry of files after twenty-four hours to prevent a user’s files from lingering online indefinitely. We felt that this approach would both provide sufficient privacy and also be seamless enough to satisfy all users of Send.

Building an MVP

I spent the next couple of weeks building a minimal viable product that we could provide to the Legal and Security teams for review. This involved multiple rounds of hacking together some code, and then organizing it into logical modules. The modules became especially important as more teams came together to start working on the experiment.

After finishing a basic working version of the experiment, we decided to put it through some UX testing. We video conferenced with several research participants and sent their feedback to our Taipei UX team, who eventually fleshed out the UX that is used in Send today. After building a working version of the application, I wrote several test cases to ensure that future code edits would not break the file transfer modules and the server side API.

Coordinating with Different Teams

One of the biggest skills that this internship has helped me develop is the ability to coordinate with multiple teams to get a feature implemented. For example, I worked with the Security and Legal teams to make sure Send met Mozilla’s high standards for release (even for a Test Pilot experiment), as well as tackled new bugs found by the Quality Assurance team. I also had the opportunity to work with our Operations team to make sure our production environment was set up correctly. Around the same time, I finished writing frontend and backend tests for the Send app, and we were able to finalize a UI.

By early July, most of the major features of Send had been implemented, so it was mostly a matter of adding metrics, refactoring code, and adding localization scripts. I had to add in Mozilla’s L20n library, which meant refactoring a great deal of code so the localization team could work with it. At the same time, I was working on anonymous metrics collection, so I had to work with the Operations team to set up error reporting and analytics correctly. I’ve learned a lot of technical skills this summer, but I also learned a great deal of social skills.

The Launch of Send

After around three months of work, Send released to the Test Pilot website. This was probably the best part of my internship with Test Pilot: I was able to work on a product from its inception all the way to release. The work was fast-paced and I had to make lots of revisions to get code cleared by various teams, but in the end it was extremely rewarding. During the days approaching the release date, I was nervous as this was the first project in which I was a major contributor that would be publicly released.

Although I knew the Mozilla name would garner some interest, the release of Send was met with more fanfare than I could have imagined, being reported on by several different large tech-oriented media outlets. I felt very proud of what I had accomplished! Before this summer started, I never would have guessed that a product I developed would be used by people around the world.


My Summer Internship with Firefox Test Pilot was originally published in Firefox Test Pilot on Medium, where people are continuing the conversation by highlighting and responding to this story.

Gervase MarkhamHow One Tweet Can Ruin Your Life

This video is pretty awesome throughout, but the pinnacle is at the end:

The great thing about social media was how it gave a voice to voiceless people, but we’re now creating a surveillance society, where the smartest way to survive is to go back to being voiceless. Let’s not do that. — Jon Ronson

Mozilla Addons BlogUpcoming Changes in Compatibility Features

Firefox 57 is now on the Nightly channel (along with a shiny new logo!). And while it isn’t disabling legacy add-ons just yet, it will soon. There should be no expectation of legacy add-on support on this or later versions. In preparation for Firefox 57, a number of compatibility changes are being implemented on addons.mozilla.org (AMO) to support this transition.

Upcoming Compatibility Changes

  • All legacy add-ons will have strict compatibility set, with a maximum version of 56.*. This is the end of the line for legacy add-on compatibility. They can still be installed on Nightly with some preference changes, but may break due to other changes happening in Firefox.
  • Related to this, you won’t be able to upload legacy add-ons that have a maximum version set higher than 56.*.
  • It will be easier to find older versions of add-ons when the latest one isn’t compatible. Some developers will be submitting ports to the WebExtensions API that depend on very recent API developments, so they may need to set a minimum version of 56.0 or 57.0. That can make it difficult for users of older versions of Firefox to find a compatible version. To address this, compatibility filters on search will be off by default. Also, we will give more prominence to the All Versions page, where older versions of the add-on are available.
  • Add-ons built with WebExtensions APIs will eventually show up higher on search rankings. This is meant to reduce instances of users installing add-ons that will break within a few weeks.

We will be rolling out these changes in the coming weeks.

Add-on compatibility is one of the most complex AMO features, so it’s possible that some things won’t work exactly right at first. If you run into any compatibility issues, please file them here.

The post Upcoming Changes in Compatibility Features appeared first on Mozilla Add-ons Blog.

Air MozillaReps Weekly Meeting Aug. 10, 2017

Reps Weekly Meeting Aug. 10, 2017 This is a weekly call with some of the Reps to discuss all matters about/affecting Reps and invite Reps to share their work with everyone.

Air Mozilla: Mozilla Science Lab August 2017 Bi-Monthly Community Call

Mozilla Science Lab August 2017 Bi-Monthly Community Call

Dzmitry Malyshau: Overhead analysis for Vulkan Portability

One of the design goals for the portability API is to keep any overhead (when translating to other APIs) to a minimum, ideally providing a zero-cost abstraction. In this article, we’ll break the potential sources of overhead into groups and analyze the prospects of each, suggesting possible solutions. The problem in question is very broad, but we’ll spice it up with examples raised in the Vulkan Portability Initiative.

Another unit of compilation

When adding an indirection layer inside a program, given a language with zero-cost abstractions like C++ or Rust, it is possible to have the layer completely optimized away. However, the library will be provided as a static/dynamic binary, which prevents the linker from inlining the calls. That means doubling the cost of a function invocation (as opposed to its execution) compared to a native API.

Solutions:

  • whole program optimization
  • pure header library
    • locks into using C/C++
    • inconvenient
    • long compile times

Native API differences

Some aspects of the native APIs don’t exactly match up. This is amplified by the flexible nature of Vulkan, which tends to provide the richest feature set compared to D3D12 and Metal.

For example, Vulkan allows command buffers to be re-used, and so does D3D12. In Metal, however, it’s not directly supported. If this ability is exposed unconditionally, the Metal backend would have to record all the encoded commands on the side before translating them to the corresponding MTL*CommandEncoder interface.

When the user requests to use the command buffer again, the Metal backend would have to re-encode the native command buffer on the spot, which adds a considerable delay to the otherwise inexpensive operation of submitting a command buffer for execution.
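
To make that re-encoding cost more concrete, here is a minimal Rust sketch of the record-and-replay idea described above. It is not the actual portability-layer code: the Cmd enum, Encoder trait, and PrintEncoder type are hypothetical stand-ins for the real Metal interfaces.

// Hypothetical, simplified command representation.
#[derive(Debug)]
enum Cmd {
    BindPipeline(u32),
    Draw { vertices: u32 },
}

// Stand-in for a native MTL*CommandEncoder; here it just prints.
trait Encoder {
    fn encode(&mut self, cmd: &Cmd);
}

struct PrintEncoder;

impl Encoder for PrintEncoder {
    fn encode(&mut self, cmd: &Cmd) {
        println!("encoding {:?}", cmd);
    }
}

// A "reusable" command buffer: commands are kept on the side so that every
// submission can re-encode them into a fresh native encoder.
struct CommandBuffer {
    recorded: Vec<Cmd>,
}

impl CommandBuffer {
    fn new() -> Self {
        CommandBuffer { recorded: Vec::new() }
    }
    fn record(&mut self, cmd: Cmd) {
        self.recorded.push(cmd);
    }
    fn replay(&self, enc: &mut dyn Encoder) {
        for cmd in &self.recorded {
            enc.encode(cmd); // the extra pass that delays re-submission
        }
    }
}

fn main() {
    let mut cb = CommandBuffer::new();
    cb.record(Cmd::BindPipeline(1));
    cb.record(Cmd::Draw { vertices: 3 });

    let mut enc = PrintEncoder;
    cb.replay(&mut enc); // first submission
    cb.replay(&mut enc); // re-use: the whole buffer gets encoded again
}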

Solutions:

  • more granularity of device capabilities
  • pressure other platforms to add native support for missing features

Skewed idiomaticity

An API is typically associated with a certain way of thinking about and approaching the problems to solve. Providing a Vulkan-like front-end to an “alien” API skews users toward thinking in terms of Vulkan and what is efficient in it, as opposed to the native APIs.

For example, Vulkan has render sub-passes. Organizing the graphics workload in sub-passes allows tiled hardware (typically found in mobile devices) to re-use intermediate rendering results without a road-trip to VRAM. This can be a big optimization, yielding up to 50% performance increase as well as reduced power usage.

No other API has sub-passes. It is straightforward to emulate them by treating each sub-pass as an independent pass. However, ignoring the fact that intermediate results now go back and forth to VRAM would cause the graphics pipeline to wait for these transfers and stall. With a non-multi-pass approach, the user would insert an independent graphics job between the producers and consumers of the data, just to hide the memory latency by not immediately waiting for the producer job to finish.

When the user writes for Vulkan exclusively, they can have a firm belief that the driver optimizes the sub-passes (e.g. by reordering the work) for non-tiled hardware. When Vulkan is translated to D3D12 and Metal, there is no such luxury.

Solutions:

  • device feature
    • either a soft one, serving as a hint that sub-passes are efficient
    • or a hard one, for allowing the use of multiple sub-passes in general
  • pray that other native APIs will catch up one day

Conclusion

We categorized the possible sources of performance overhead by ascending the ladder of abstraction. We started from low-level compilation units, proceeded through the warts of API translation, and finished with the idiomatic differences between native APIs.

There are no good solutions to these problems. Our task (as a technical subgroup) is to strike a balance between diverging minimally from Vulkan and providing optimal performance on other backends, while keeping the API simple and explicit.

Hub Figuière: Status update, August 2017

Work:

In March I joined the team at eyeo GmbH, the company behind Adblock Plus, as a core developer. Among other things I'm improving the filtering capabilities.

While they are based in Cologne, Germany, I'm still working remotely from Montréal.

It is great to help make the web more user-centric.

Personal project:

I started working again on Niepce, and I am currently implementing the file import. I also started to rewrite the back-end in Rust. The long-term goal is to move completely to Rust; this will happen in parallel with feature implementation.

This and other satellite projects are part of the larger plan I have for digital photography on Linux with GNOME.

'til next time.

Nick Cameron: What the RLS can do

IDE support for Rust is one of the most requested features in our surveys and is a key part of Rust's 2017 roadmap. Here, I'm going to talk about one of the things we're doing to bring Rust support to IDEs - the RLS.

Programmers can be pretty picky about their editors, so we want to support as broad a selection of editors as possible. A key step towards that goal is implementing the Rust Language Server (RLS). The RLS is a service for providing information about Rust programs. It works with multiple sources of data, primarily the Rust compiler. It communicates with editors using the Language Server Protocol (LSP) so that clients can perform actions such as 'code completion', 'jump to definition', and 'find all references'.

The intention is that the RLS will support multiple clients. Any editor that wants to provide IDE-type functionality for Rust programs can use the RLS. In fact, many editors can get fairly good support just by using a generic LSP client plugin, but you get the best results by using a dedicated Rust client.

We've been working on two very different clients. One is a Visual Studio Code plugin which makes VSCode a Rust IDE. The other is rustw, an experimental web-app for building and exploring Rust programs. Rustw might become a useful tool in its own right or be used to browse source code in Rustdoc. We're also working on a new version of Rustdoc that uses the RLS, rather than being tightly integrated with the compiler.

Visual Studio Code with RLS

rustw - errors
rustw - code browsing

I plan to follow up this blog post with another going over the RLS internals, for RLS client implementors and RLS contributors. In this post, I’ll cover the fun stuff - features that will improve your life as a Rust developer.

Type and docs on hover

Hover over an identifier in VSCode or rustw to see its type and documentation. You'll get a link to rustdoc and the source code for standard library types too.

VSCode - type on hover

Semantic highlighting

When you hover over a name in rustw or click inside one in VSCode, we highlight other uses of the same name. Because this is powered by the compiler, we can be smart about this and show exactly the right uses, for example, skipping different variables with the same name.

VSCode - selection highlighting

Code completion

Code completion is where an IDE suggests variable, field, or method names for you based on what you're typing. The RLS uses Racer behind the scenes for code completion, but clients don't need to be aware of this. In the long-term, the compiler should power code completion.

VSCode - code completion

Jump to definition

Jump from the use of a name to where it is defined. This is a key feature for IDEs and code exploration tools. Use F12 in VSCode or a left click in rustw. This works for variables, fields, methods, functions, modules, and more. Lifetimes and macros should work in the next few months.

Find all references

Find all uses of an item throughout a program. The RLS is smart enough to know about different items with the same name, to understand generic types, see through macros, and see references in different crates.

VSCode - find all refs

Find impls

Find all implementations (impls) for a trait or concrete type.

VSCode - find all refs

Apply suggestions

VSCode displays errors from building your program and highlights the errors in your code with squiggles. For some errors, you can apply a suggestion to quickly fix the error. We're working on expanding the errors which support such suggestions.

VSCode - apply error suggestion

Go to symbol

Search for declarations in a file then jump to them.

VSCode go to symbol

Identifier search

Search for a name - we'll show you definitions and uses.

rustw - identifier search

Renaming

The most fundamental refactoring - rename an item. The RLS will rename the definition and all uses, without touching different items with the same name.

VSCode - rename

Note how both instances of the digest variable are renamed but the field with the same name is not.
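
A rough illustration of the distinction (a hypothetical snippet, not the code from the demo): a local variable and a struct field share the name digest, and renaming the variable, say to hash, must leave the field alone.

struct Entry {
    digest: String, // a *field* named digest: the rename leaves this alone
}

fn main() {
    let digest = String::from("abc123"); // a *variable* named digest: this is what gets renamed
    let entry = Entry { digest: digest.clone() };
    println!("{} / {}", digest, entry.digest);
}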

Deglob refactoring

Replace a glob import (use foo::*;) with a list import of the names actually imported. You can find this refactoring in the command palette in VSCode.

VSCode - deglob
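
As a hedged illustration (the exact import list depends on which names the module really uses), the deglob refactoring turns a wildcard import like the one below into an explicit one:

// Before the refactoring: a glob import pulls in everything from the module.
use std::collections::*;

fn count_words(text: &str) -> HashMap<&str, usize> {
    let mut counts = HashMap::new();
    for word in text.split_whitespace() {
        *counts.entry(word).or_insert(0) += 1;
    }
    counts
}

fn main() {
    println!("{:?}", count_words("to be or not to be"));
}

// After applying the deglob refactoring, the wildcard is replaced by the
// names that are actually used:
//     use std::collections::HashMap;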

Reformatting

The RLS uses Rustfmt to reformat your code.

VSCode - reformat

Trying out the RLS

The best way to try out the RLS is in Visual Studio Code:

  • download vscode
  • install the Rust (RLS) extension by typing ext install rust into the command palette (Ctrl + P)
  • open a Rust project (i.e., a folder containing Cargo.toml), and then a Rust file

You'll need to be using rustup. See the extension's home in the VSCode marketplace for more information.

Future plans

The RLS beta is currently available using Rustup with the nightly toolchain. Soon we should extend that support to beta and stable toolchains. Note that beta vs 1.0 for the RLS is separate from the Rust toolchain being used.

There's pretty much an infinite number of features we could support in an IDE. Also, there are lots of different ways to construct a Rust program, so there are a lot of edge cases and thus a lot of robustness work to do too. However, I hope we are ready to announce 1.0 for the RLS around the end of 2017 or start of 2018.

The Visual Studio Code extension will continue to evolve. We have no big changes planned, but plan to iteratively improve and make regular releases. Hopefully RLS support will appear in other editors soon. Rustw will hopefully evolve into a more full-featured code browser and will be used in rustdoc as well as some other places.

Helping out

Let us know if you encounter any problems by filing issues on the RLS repo.

If you'd like to help by writing code, tests, or docs, then have a look at the repos for the RLS, our VSCode extension, or rustw. Or come talk to us on IRC in #rust-dev-tools. We would love for you to help us out!

Dave Townsend: New Firefox and Toolkit module peers in Taipei!

Please join me in welcoming three new peers to the Firefox and Toolkit modules. All of them are based in Taipei, and I believe they are our first peers based there, which is very exciting as it means we now have more global coverage.

  • Tim Guan-tin Chien
  • KM Lee Rex
  • Fred Lin

I’ve blogged before about the things I expect from the peers, and while I try to keep the lists up to date myself, please feel free to point out folks you think may have been passed over.

Firefox Test Pilot: Say Hi to Send 1.1.0

We’re excited to announce the arrival of Send 1.1.0. Send now supports Microsoft Edge and Safari! In addition to expanded browser support, we’ve made several other improvements:

  • You can now send files from iOS (results may vary with receiving on iOS).
  • We no longer send file hashes to the server.
  • We fixed a bug that let users accidentally cancel downloads mid-stream.
  • You can now copy to clipboard from a mobile device, and we detect if copy-to-clipboard is disabled.
  • We now ship in 36 languages!

Right now we’re working on a raft of minor fixes, before moving on to larger features such as PIN protected files and multi-file uploads. We’re hoping to maintain a steady shipping schedule in the coming weeks even though we’re losing our beloved interns. I’ll post about performance and feature improvements as they ship.


Say Hi to Send 1.1.0 was originally published in Firefox Test Pilot on Medium, where people are continuing the conversation by highlighting and responding to this story.

Air Mozilla: The Joy of Coding - Episode 109

The Joy of Coding - Episode 109 mconley livehacks on real Firefox bugs while thinking aloud.

Mozilla Open Innovation Team: The Mozilla Information Trust Initiative: Building a movement to fight misinformation online

Today, we are announcing the Mozilla Information Trust Initiative (MITI) — a comprehensive effort to keep the Internet credible and healthy. Mozilla is developing products, research, and communities to battle information pollution and so-called ‘fake news’ online. And we’re seeking partners and allies to help us do so.

Here’s why.

Imagine this: Two news articles are shared simultaneously online.

The first is a deeply reported and thoroughly fact checked story from a credible news-gathering organization. Perhaps Le Monde, the Wall Street Journal, or Süddeutsche Zeitung.

The second is a false or misleading story. But the article is designed to mimic content from a credible newsroom, from its headline to its dissemination.

How do the two articles fare?

The first article — designed to inform — receives limited attention. The second article — designed for virality — accumulates shares. It exploits cognitive bias, belief echoes, and algorithmic filter bubbles. It percolates across the Internet, spreading misinformation.

This isn’t a hypothetical scenario — it’s happening now in the U.S., in the U.K., in France, in Germany, and beyond. The Pope did not endorse a U.S. presidential candidate, nor does India’s 2000-rupee note contain a tracking device. But fabricated content, misleading headlines, and false context convinced millions of Internet users otherwise.

The impact of misinformation on our society is one of the most divisive, fraught, and important topics of our day. Misinformation depletes transparency and sows discord, erodes participation and trust, and saps the web’s public benefit. In short: it makes the Internet less healthy. As a result, the Internet’s ability to power democratic society suffers greatly.

This is why we’re launching MITI. We’re investing in people, programs, and projects that disrupt misinformation online.

Why Mozilla? The spread of misinformation violates nearly every tenet of the Mozilla Manifesto, our guiding doctrine. Mozilla has a long history of putting community and principles first, and devoting resources to urgent issues — our Firefox browser is just one example. Mozilla is committed to building tolerance rather than hate, and building technology that can protect individuals and the web.

So we’re drawing on the unique depth and breadth of the Mozilla Network — from journalists and technologists to policymakers and scientists — to build functional products, research, and community-based solutions.

Misinformation is a complex problem with roots in technology, cognitive science, economics, and literacy. And so the Mozilla Information Trust Initiative will focus on four areas:

Product

Mozilla’s Open Innovation team will work with like-minded technologists and artists to develop technology that combats misinformation.

Mozilla will partner with global media organizations to do this, and also double down on our existing product work in the space, like Pocket, Focus, and Coral. Coral is a Mozilla project that builds open-source tools to make digital journalism more inclusive and more engaging.

Literacy

We can’t solve misinformation with technology alone — we also need to educate and empower Internet users, as well as those leading innovative literacy initiatives.

Mozilla will develop a web literacy curriculum that addresses misinformation, and will continue investing in existing projects like the Mission: Information teaching kit.

Research

Misinformation in the digital age is a relatively new phenomenon. To solve such a thorny problem, we first need to fully understand it.

Later this year, Mozilla will be releasing original research on how misinformation impacts users’ experiences online. We will be drawing on a dataset of user-level browsing data gathered during the 2016 U.S. elections.

Creative interventions

Mozilla will field and fund pitches from technologists who are combatting misinformation using various mediums, including virtual reality and augmented reality. It’s an opportunity to apply emerging technology to one of today’s most pressing issues.

Imagine: an augmented reality web app that uses data visualization to investigate misinformation’s impact on Internet health. Or, a virtual reality experience that takes users through the history of misinformation online.

Mozilla will also support key events in this space, like Media Party Argentina, the Computation+Journalism Symposium, the Online News Association, the 22×20 summit, and a MisinfoCon in London as part of MozFest. (To learn more about MozFest — Mozilla’s annual, flagship event devoted to Internet health — visit mozillafestival.org.)

We’re hoping to hear from and work with partners who share our vision. Please reach out to Phillip Smith, Mozilla’s Senior Fellow on Media, Misinformation & Trust, at miti@mozilla.com to get involved.

More than ever, we need a network of people and organizations devoted to understanding, and combatting, misinformation online. The health of the Internet — and our societies — depends on it.

This post was originally published on The Mozilla Blog.


The Mozilla Information Trust Initiative: Building a movement to fight misinformation online was originally published in Mozilla Open Innovation on Medium, where people are continuing the conversation by highlighting and responding to this story.

Dzmitry Malyshau: Rusty Object Notation

JavaScript. The practice-oriented language made scripting the Web possible for millions of programmers. It grew an ecosystem of libraries and even started attacking domains seemingly independent of the Web, such as native applications (Node.js) and interchange formats (JSON).

There is a lot not to like in JSON, but the main issue here is the lack of semantics. JavaScript doesn’t differentiate between a map and a struct, so any other language using JSON has to suffer. If only we had an interchange format made for a semantically strong language, preferably modern and efficient… like Rust. Here comes Rusty Object Notation - RON.

RON aims to be a superior alternative to JSON/YAML/TOML/etc, while having consistent format and simple rules. RON is a pleasure to read and write, especially if you have 5+ years of Rust experience. It has support for structures, enums, tuples, homogeneous maps and lists, comments, and even trailing commas!

We are happy to announce the release of the RON library, version 0.1. The implementation uses serde for convenient (de-)serialization of your precious data. It has already been accepted as the configuration format for the Amethyst engine. And we are just getting started ;)
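
As a rough sketch of what this can look like (assuming serde with the derive feature and a from_str entry point, which in some versions of the crate lives at ron::de::from_str), a RON document maps naturally onto Rust structs and enums:

use serde::Deserialize;

#[derive(Debug, Deserialize)]
struct Scene {
    name: String,
    entities: Vec<Entity>,
}

#[derive(Debug, Deserialize)]
enum Entity {
    Light { intensity: f32 },
    Mesh(String),
}

fn main() {
    // Note the Rust-like syntax: optional struct names, named fields,
    // comments, and trailing commas are all allowed.
    let text = r#"
        Scene(
            name: "demo",
            entities: [
                Light(intensity: 0.8),
                Mesh("cube.obj"), // trailing commas are fine
            ],
        )
    "#;
    let scene: Scene = ron::from_str(text).expect("valid RON");
    println!("{:?}", scene);
}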

RON was designed a few years ago, to be used for a game that is no longer in development. The idea rested peacefully until one shiny day torkleyy noticed the project and brought it to life. Now the library is perfectly usable and settles the question of a readable data format for all of my future projects, and, I hope, yours too!

Air Mozilla: Weekly SUMO Community Meeting August 9, 2017

Weekly SUMO Community Meeting August 9, 2017 This is the sumo weekly call

Air Mozilla: Bugzilla Project Meeting, 09 Aug 2017

Bugzilla Project Meeting The Bugzilla Project Developers meeting.

Mozilla Addons Blog: Friend of Add-ons: Santosh Viswanatham

Our newest Friend of Add-ons is Santosh Viswanatham! Santosh attended a regional event hosted by Mozilla Rep Srikar Ananthula in 2012 and has been an active leader in the community ever since.  Having previously served as a Firefox Student Ambassador and Regional Ambassador Lead, he is currently a Tech Speaker and a member of the Mozilla Campus Clubs Advisory Committee, where he is helping develop an activity for building extensions for Firefox.

Santosh has brought his considerable enthusiasm for open source software to the add-ons community. Earlier this year, he served a six-month term as a member of the Featured Add-ons Advisory Board, where he helped nominate and select extensions to be featured on addons.mozilla.org each month. Additionally, Santosh hosted a hackathon in Hyderabad, India, where 100 developers spent the night creating more than 20 extensions.

When asked to describe his experience contributing to Mozilla, Santosh says:

“It has been a wonderful opportunity to work with like-minded incredible people. Contributing to Mozilla gave me an opportunity to explore myself and stretched my limits working around super cool technologies. I learned tons of things about technology and communities, improved my skill set, received global exposure, and made friends for a lifetime by contributing to Mozilla.”

In his free time, Santosh enjoys dining out at roadside eateries, spending time with friends, and watching TV shows and movies.

Congratulations, Santosh, and thank you for all of your contributions!

Are you a contributor to the add-ons community or know of someone who should be recognized? Please be sure to add them to our Recognition Wiki!

The post Friend of Add-ons: Santosh Viswanatham appeared first on Mozilla Add-ons Blog.

Daniel Stenberg: Some things to enjoy in curl 7.55.0

In this endless stream of frequent releases, the next release isn’t terribly different from the previous.

curl’s 167th release is called 7.55.0 and while the name or number isn’t standing out in any particular way, I believe this release has a few extra bells and whistles that makes it stand out a little from the regular curl releases, feature wise. Hopefully this will turn out to be a release that becomes the new “you should at least upgrade to this version” in the coming months and years.

Here are six things in this release I consider worthy of some special attention. (The full changelog.)

1. Headers from file

The command line options that allow users to pass custom headers can now read a set of headers from a given file.

2. Binary output prevention

Invoke curl on the command line, give it a URL to a binary file and see it destroy your terminal by sending all that gunk to the terminal? No more.

3. Target independent headers

You want to build applications that use libcurl and build for different architectures, such as 32 bit and 64 bit builds, using the same installed set of libcurl headers? Didn’t use to be possible. Now it is.

4. OPTIONS * support!

Among HTTP requests, this is a rare beast. Starting now, you can tell curl to send such requests.

5. HTTP proxy use cleanup

Asking curl to use an HTTP proxy while doing a non-HTTP protocol would often behave in unpredictable ways, since it wouldn’t do CONNECT requests unless you added an extra instruction. Now libcurl will assume CONNECT operations for all protocols over an HTTP proxy unless you use HTTP or FTP.

6. Coverage counter

The configure script now supports the option --enable-code-coverage. We now build all commits done on GitHub with it enabled, run a bunch of tests and measure the test coverage data it produces - that is, how large a share of our source code is exercised by our tests. We push all coverage data to coveralls.io.

That’s a blunt tool, but it could help us identify parts of the project that we don’t test well enough. Right now it says we have 75% coverage. While not totally bad, it’s not very impressive either.

Stats

This release ships 56 days since the previous one. Exactly 8 weeks, right on schedule. 207 commits.

This release contains 114 listed bug-fixes, including three security advisories. We list 7 “changes” done (new features basically).

We got help from 41 individual contributors who helped making this single release. Out of this bunch, 20 persons were new contributors and 24 authored patches.

283 files in the git repository were modified for this release. 51 files in the documentation tree were updated, and in the library 78 files were changed: 1032 lines inserted and 1007 lines deleted. 24 test cases were added or modified.

The top 5 commit authors in this release are:

  1. Daniel Stenberg
  2. Marcel Raad
  3. Jay Satiro
  4. Max Dymond
  5. Kamil Dudka

Cameron Kaiser: And now for several things that are completely different: Vintage Computer Festival aftermath, I pass a POWER9 kidneystone, and isindex isdead which issad

So, you slugs who didn't drag yourselves to the Computer History Museum in Mountain View for this year's Vintage Computer Festival West, here's what you didn't see (and here's what you didn't see last year).

You didn't see me cram two dorm refrigerator-sized Apple servers and a CRT monitor into my Honda Civic,

you didn't see my Apple Network Server exhibit, complete with a Shiner HE prototype and twin PowerBook 2300 Duos and an Outbound notebook serving as clients,
you didn't see a functioning Xerox Alto,
you didn't see SDF's original AT&T 3B2,
you didn't see Bil Herd, Leonard Tramiel and other old Commodore luminaries talking about then and now,
you didn't see a replica "CADET" IBM 1620, "just as it was" in 1959 (the infamous system that used lookup tables for addition rather than a proper adder, hence the acronym's alternative expansion as "Can't Add, Doesn't Even Try"),
you didn't see a JLPGA PowerBook 170 signed by John Sculley,
you didn't see a prototype dual G4 PowerBook,
you didn't see a prototype Mac mini with an iPod dock (and an amusing FAIL sticker),
you didn't see components from the Cray-1 supercomputer,
you didn't see this 6502-based astrology system in the consignment section, of the same model used by Nancy Reagan's astrologer Joan Quigley,
and you didn't see me investigate this Gbike parked out front, possibly against company policy.
You could have, if you had come. But now it's too late. Try again next year.

But what you still have a chance to see is your very own Talos II POWER9 workstation under your desk, because preorders opened today. Now, a reminder: I don't work for Raptor, I don't get any money from Raptor, and I paid retail; I'm just a fairly intransigent PowerPC bigot who is willing to put my Visa card where my mouth is.

Currently on its way to my doorstep is a two-CPU, octocore (each core is SMT-4, so that's 32 threads) Sforza POWER9 Talos II with 32GB of DDR4 ECC RAM, an AMD Radeon Pro WX7100, a 500GB NVMe SSD and an LSI 9300 8-port internal SAS controller. The system comes standard with a case, eight SAS/SATA bays, EATX motherboard, fans for each CPU, dual 1400W redundant PSUs, USB 3.0 and 2.0, RS-232, VGA, Blu-ray optical drive, dual Gigabit Ethernet, five PCIe slots (PCIe 4.0, 3 x16 and 2 x8) and a recovery disc. It runs Linux on ppc64le, which is fully supported. The total cost shipped to my maildrop with a hex driver for the high-speed fan assemblies is $7236.

Now, some of you are hyperventilating by now and a few of you may have gone into frank sticker shock. Before you reach for the Xanax, please remember this is most assuredly not a commodity x86_64 machine; this is a different (and Power ISA successor) architecture with fully auditable firmware, the ability for you to do your own upgrades and service with off-the-shelf parts, and no binary blobs with hidden spies like the Intel Management Engine. This is a niche box for people like us who value alternative architectures, especially in a design that we can build and trust ourselves, and I always said something like this wouldn't come cheap. But let's compare and say you're in the market for a Mac Pro or something. You'll still be paying a lot, especially if you get any of the tasty BTO options, and the next Mac Pro is still months away or more. And if you were actually in the market for an AmigaOne X5000, this blows it out of the water. You could just run UAE on this and have cycles to spare!

When the Talos II arrives, I'll be sure to post some unboxing photos and take it through its paces on first boot and give you some initial impressions. My immediate goal is to get a RAID set up, get QEMU able to run a decent subset of my old Mac software (I'll probably start with OS 9, and then create a Tiger instance or clone the G5 to it), and get Firefox running with compiler settings appropriate to the CPU. Then will come the real fun of writing a JavaScript JIT for POWER9.

But don't worry: the G5 isn't going anywhere and neither is TenFourFox. I've got a lot invested in this Quad and it will still be serving workstation duty for awhile yet. Nevertheless, get your credit card and your intestinal fortitude out in the meantime and reserve a Talos of your own while the pre-order period is open. Time to get in while it's hot. This is the next evolutionary step in personal computing with PowerPC.

As we wind up our discussion of the future, however, one part of the past will soon be almost completely gone: the venerable old <isindex> HTML tag. Firefox will be removing it from 56 for technical reasons after it was already removed from Google Chrome and the Safari preview. This construct dates back to the very earliest days of the Web when early browsers didn't have form support; it was designed as an easy way of enabling the user to send search keywords or parameters to a webserver, much like Gopher servers receive queries over item type 7. Mosaic 1.x even had a little form that was a permanent part of the browser chrome with a search button, as you can see from the screenshot at the Macintosh Repository, which would be activated when the tag was seen. Later on, subsequent versions of Mosaic and most of the successor browsers turned it into a pseudo-form that functioned the same way as far as the server is concerned and some of those sites are still around. Myself I use the tag mostly as a convenience for old browsers and Lynx on the Hytelnet-HTTP gateway; the search system offers both a conventional search form and an <isindex> query, both of which work the same, and both of which can still be seen in 52ESR, 54 and the 55 beta for the time being. It goes without saying that I will not be removing it from TenFourFox, and it will eternally remain in our codebase and on my servers as a relic of the way things were and an echo of the way the early Web was.

Michael Verdi: New download and install flow for Firefox 55

It’s been quite a while (January!) since I posted an update about the onboarding work we’ve been doing. If you’ve been using Nightly or read any of the Photon Engineering newsletters, you may have seen the new user tour we’re building but onboarding encompasses much more than that and we shipped some important pieces in Firefox 55 today.

The experiment we ran back in February (along with a follow up in May) went really well*. We had 4 important successes:

  1. The changes to the installer resulted in 8% more installs (that’s unheard of!).
  2. We retained 2.4% more of the people who went through our new experience. (Combined with the installer change, that means 10.6% more people using Firefox.)
  3. Ratings for the new flow were on par with ratings of the existing flow. In addition, in user research, participants responded positively to the art on the new download page and installer and some were delighted by the animation on the firstrun page.

     
    I thought it was really cute. Especially the little sunrise at the beginning. That was precious. I thought it was kind of ingenious. It kind of implied that you’re using a product that’s pulling you into the light. Something like that. It was a cute little interactive feature which I really enjoyed.
    – Research participant

  4. Changing the /firstrun page to a sign-in flow instead of a sign-up flow resulted in a 14.8% increase in people ending up with a second device connected to sync (which is the whole point of sync).

So today with Firefox 55 we shipped a new streamlined installer, we moved the default browser ask to the second session and we now open the privacy notice in a second tab instead of displaying a bottom notification bar. These changes join the new download and firstrun pages that shipped 2 weeks ago.

Here’s a quick video of Firefox 55 in action.


Planet Mozilla viewers – you can watch this video on YouTube (1 min.).

It is not an easy feat to build a whole new flow that cuts a swath across internal organizations and I’m incredibly proud of the work our team did to get here. And there’s a lot more to come (like that new user tour) that I’ll outline in another post.

*We weren’t able to properly test the automigration feature (automatically importing your stuff from another browser) back in February because of underlying performance issues that we discovered in the migration tool. We fixed many of the performance issues with migration, but a subsequent test revealed that they haven’t all been fixed. Sadly, in a flow where we do this silently, some people just experience a janky, slow Firefox. So we’re not going to ship automigration for now, and instead we’re going to replace the modal import wizard on startup with a non-modal message embedded in Activity Stream beginning in Firefox 57.

Princi Vershwal: Getting into Outreachy: An open source internship program.

What is Outreachy?

Outreachy is a wonderful initiative for women and people from groups underrepresented in free and open source software to get involved. If you are new to open source and searching for an internship that can boost up your confidence in open source, Outreachy would be a great start for you.

Outreachy interns work on a project for an organization, under the supervision of a mentor, for 3 months. Various open source organizations (e.g. Mozilla, GNOME, Wikimedia, and the Linux kernel, to name a few) take part in the Outreachy program. It is similar to the Google Summer of Code program, but one difference is that participation isn’t limited to just students.

Another major difference is that it happens twice a year. There are both summer and winter rounds. So you don’t have to wait for the entire year but you can start contributing anytime and prepare for the next round which would be some months later.

My involvement with Outreachy

Before Outreachy, I had done web and Android development projects in college, but I was new to the huge world of open source. My first encounter with open source was in November last year, and at that time Outreachy was nowhere on my mind.

I heard about the program through a college senior who had participated in Outreachy earlier. I decided to participate in the coming round and started solving good-first-bugs.

The application period itself gave me a lot of confidence in my skills and work as a developer. I enjoyed it so much that I used to spend my whole day solving bugs here and there or just reading blogs about the program or the participating organizations.

Finally, there was the result day and I was selected for an internship at Mozilla for round 14.

I am currently working on Push Notifications for Signin Confirmation in Firefox Accounts. I am really enjoying my work. It is super exciting!!

Applying for Outreachy?

If you are planning to apply for the next round of Outreachy, here’s some advice that I can offer:

Start early

It is always better to know what is coming up. Try to explore as much as you can before the organizations and projects are announced. If you are a beginner, read about Outreachy, previously participated organizations, and start making contributions. You will learn a lot while contributing.

Choose your project/organization wisely

Once the organizations are announced, you will have about 50 projects (from different organizations, programming languages, and fields) to choose from. This is great because you can start contributing to the project you are most interested in.

Explore all the projects and choose one which interests you the most and you feel motivated to work on that project for the next 3–4 months.

Ask Questions

Do not hesitate to ask questions, even if you think the question is silly, because that one small question can block you for many days. First search for the solution yourself, but if it takes more than a day or two, just ask. Outreachy respects the fact that you might be a beginner, and everybody is going to respond to your query respectfully.

If it is an issue/project related doubt ask the mentors, otherwise for any Outreachy related query you can join the #outreachy channel on IRC.

Stay consistent

There can be days when you face block after block, but stay motivated and don’t stop trying. Don’t get disheartened if your patches are not accepted in the early stages. Eventually they will be. They just need a little more polishing. Keep going and one day you will get your PR merged!! :)

Be respectful

Always be respectful to your mentors and co-participants while communicating. If you see that a fellow participant is stuck on a similar problem and you feel that you can help, just share your knowledge, even if he/she is your competitor. Getting selected is a goal, but spreading knowledge and involving more people in open source is the bigger aim of Outreachy.

Know your project before submitting an application

You do not have to hurry to submit a proposal. Get to know your project, set up the platform, and solve bugs, and once you get comfortable with the code and platform, submit the application. This way you will have a better idea about the project, and this will be reflected in your application.

Don’t get disheartened, and learn from past mistakes

If you do not get selected for one round of Outreachy, don’t be upset. Keep in mind that the next round is just a few months away, and your chances of getting selected in the next round will only double if you keep contributing.

If you have any other query regarding Outreachy feel free to drop me an email at vershwal.princi@gmail.com.
Happy coding!!

Hacks.Mozilla.Org: Firefox 55: first desktop browser to support WebVR

WebVR Support on Desktop

Firefox on Windows is the first desktop browser to support the new WebVR standard (and macOS support is in Nightly!). As the originators of WebVR, Mozilla wanted it to embody the same principles of standardization, openness, and interoperability that are hallmarks of the Web, which is why WebVR works on any device: Vive, Rift, and beyond.

To learn more, check out vr.mozilla.org, or dive into A-Frame, an open source framework for building immersive VR experiences on the Web.

New Features for Developers

Firefox 55 supports several new ES2017/2018 features, including async generators and the rest/spread (“...”) operator for objects:

let a = { foo: 1, bar: 2 };
let b = { bar: 'two' };
let c = { ...a, ...b }; // { foo: 1, bar: 'two' };
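// Rest properties work in destructuring too:
let { foo, ...rest } = c; // foo === 1, rest = { bar: 'two' }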

MDN has great documentation on using ... with object literals or for destructuring assignment, and the TC39 proposal also provides a concise overview of this feature.

Over in DevTools, the Network panel now supports filtering results with queries like “status-code:200”.

Screenshot showing the Firefox DevTools' Network panel with a filter on status-code:304, and a pop-up showing the new columns that are available.

There are also new, optional columns for cookies, protocol, scheme, and more that can be hidden or shown inside the Network panel, as seen in the screenshot above.

Making Firefox Faster

We’ve implemented several new features to keep Firefox itself running quickly:

  • New installations of Firefox on Windows will now default to the more stable and secure 64-bit version. Existing installations will upgrade to 64-bit with our next release, Firefox 56.
  • Restoring a session or restarting Firefox with many tabs open is now an order of magnitude faster. For reasons unknown, Dietrich Ayala has a Firefox profile with 1,691 open tabs. With Firefox 54, starting up his instance of Firefox took 300 seconds and 2 GB of memory. Today, with Firefox 55, it takes just 15 seconds and 0.5 GB of memory. This improvement is primarily thanks to the tireless work of an external contributor, Kevin Jones, who virtually eliminated the fixed costs associated with restoring tabs.
  • Users can now adjust Firefox’s number of content processes from within Preferences. Multiple content processes debuted in Firefox 54, and allow Firefox to take better advantage of modern, multi-core CPUs, while still being respectful of RAM utilization.
  • Firefox now uses its built-in Tracking Protection lists to identify and throttle tracking scripts running in background pages. After a short grace period, Firefox will increase the minimum setInterval or setTimeout for callbacks scheduled by tracking scripts to 10 seconds while the tab is in the background. This is in addition to our usual 1 second throttling for background tabs, and helps ensure that unused tabs can’t invisibly ruin performance or battery life. Of course, tabs that are playing audio or video are not throttled, so music in a background tab won’t stutter.
  • With the announcement of Flash’s end of life, and in coordination with Microsoft and Google, Firefox 55 now requires users to explicitly click to activate Flash on web pages as we work together toward completely removing Flash from the Web platform in 2020.

Making the Web Faster

Firefox 55 introduces several new low-level capabilities that help improve the performance of demanding web applications:

See the Pen Hello IntersectionObserver by Dan Callahan (@callahad) on CodePen.

  • SharedArrayBuffer and Atomics objects are new JavaScript primitives that allow workers to share and simultaneously access the same memory. This finally makes efficient multi-threading a reality on the Web. The only downside? Developers have to care about thread safety, mutexes, etc. when sharing memory, just like in any other multi-threaded language. You can learn more about SharedArrayBuffer in this code cartoon introduction and this explainer article from last year.
  • The requestIdleCallback() API offers a new way to schedule callbacks whenever the browser has a few extra, unused milliseconds between frames, or whenever a maximum timeout has elapsed. This makes it possible to squeeze work into the margins where the browser would otherwise be idle, and to defer lower priority work while the browser is busy. Using this API requires a bit of finesse, but MDN has great documentation on how to use requestIdleCallback() effectively.

Making the Web More Secure

Geolocation and Storage join the ranks of powerful APIs like Service Workers that are only allowed on secure, https:// origins. If your site needs a TLS certificate, consider Let’s Encrypt: a completely free, automated, and non-profit Certificate Authority.

Additionally, Firefox 55 will not allow plug-ins to load from or on non-HTTP/S schemes, such as file:.

New WebExtension APIs

WebExtensions can now:

And more…

There are many more changes in the works as we get ready for the next era of Firefox in November. Some users of Firefox 55 will begin seeing our new Firefox Screenshots feature, the Bookmarks / History sidebar can now be docked on either side of the browser, and we just announced three new Test Pilot experiments.

For a complete overview of what’s new, refer to the official Release Notes, MDN’s Firefox 55 for Developers, and the Mozilla Blog announcement.

The Mozilla Blog: Firefox Is Better, For You. WebVR and new speedy features launching today in Firefox

Perhaps you’re starting to see a pattern – we’re working furiously to make Firefox faster and better than ever. And today we’re shipping a new release that’s our best yet, one that introduces exciting, empowering new technologies for creators as well as improves the everyday experience for all Firefox users.

Here’s what’s new today:

WebVR opens up a whole new world for the WWW

On top of Firefox’s new super-fast multi-process foundation, today we’re launching a breakthrough feature that expands the web to an entirely new experience. Firefox for Windows is the first desktop browser to support WebVR for all users, letting you experience next-generation entertainment in virtual reality.

WebVR enables developers and artists to create web-based VR experiences you can browse to with Firefox. So whether you’re a current Oculus Rift or HTC Vive owner – or still deciding when you’re going to take the VR leap – Firefox can get you to your VR fix faster. Once you find a web game or app that supports VR, you can experience it with your headset just by clicking the VR goggles icon visible on the web page. You can navigate and control VR experiences with handset controllers and your movements in physical space.

For a look at what WebVR can do, check out this sizzle reel (retro intro intended!).

If you’re ready to try out VR with Firefox, a growing community of creators has already been building content with WebVR. Visit vr.mozilla.org to find some experiences we recommend, many made with A-Frame, an easy-to-use WebVR content creation framework made by Mozilla. One of our favorites is A Painter, a VR painting experience. None of this would have been possible without the hard work of the Mozilla VR team, who collaborated with industry partners, fellow browser makers and the developer community to create and adopt the WebVR specification. If you’d like to learn more about the history and capabilities of WebVR, check out this Medium post by Sean White.

Performance Panel – fine-tune browser performance

Our new multi-process architecture allows Firefox to easily handle complex websites, particularly when you have many of them loaded in tabs. We believe we’ve struck a good balance for most computers, but for those of you who are tinkerers, you can now adjust the number of processes up or down in this version of Firefox. This setting is at the bottom of the General section in Options.

Tip: if your computer has lots of RAM (e.g., more than 8GB), you might want to try bumping up the number of content processes that Firefox uses from its default four. This can make Firefox even faster, although it will use more memory than it does with four processes. But, in our tests on Windows 10, Firefox uses less memory than Chrome, even with eight content processes running.

Faster startup when restoring lots of tabs

Are you a tab hoarder? As part of our Quantum Flow project to improve performance, we’ve significantly reduced the time it takes to start Firefox when restoring tabs from a previous session. Just how much faster are things now? Mozillian Dietrich Ayala ran an interesting experiment, comparing how long it takes to start various versions of Firefox with a whopping 1,691 tabs open. The end result? What used to take nearly eight minutes, now takes just 15 seconds.

A faster and more stable Firefox for 64-bit Windows

If you’re running the 64-bit version of Windows (here’s how to check), you might want to download and reinstall Firefox today. That’s because new downloads on 64-bit Windows will install the 64-bit version of Firefox, which is much less prone to running out of memory and crashing. In our tests so far, the 64-bit version of Firefox reduces crashes by 39% on machines with 4GB of RAM.

If you don’t manually upgrade, no worries. We intend to automatically migrate 64-bit Windows users to 64-bit Firefox in our next release.

A faster way to search

We’re all searching for something. Sometimes that thing is a bit of information – like a fact you can glean from Wikipedia. Or, maybe it’s a product you hope to find on Amazon, or a video on YouTube.

With today’s Firefox release, you can quickly search using many websites’ search engines, right from the address bar. Just type your query, and then click which search engine you’d like to use.

Out of the box, you can easily search with Yahoo, Google, Bing, Amazon, DuckDuckGo, Twitter, and Wikipedia. You can customize this list of search engines in settings.

Even more

Here are a few more interesting improvements shipping today:

  • Parts of a web page that use Flash must now be clicked and given permission to run. This improves battery life, security, and stability, and is a step towards Flash end-of-life.
  • You can now move the sidebar to the right side of the window.
  • Firefox for Android is now translated in Greek and Lao.
  • Simplify print jobs from within print preview.

As usual, you can see everything new in the release notes, and developers can read about new APIs on the Mozilla Hacks Blog.

We’ll keep cranking away – much more to come!

 

 

The post Firefox Is Better, For You. WebVR and new speedy features launching today in Firefox appeared first on The Mozilla Blog.

This Week In Rust: This Week in Rust 194

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

News & Blog Posts

Crate of the Week

This week's crate is aesni, a crate providing a Rust AES (Rijndael) block ciphers implementation using AES-NI. Thanks to newpavlov for the suggestion.

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

105 pull requests were merged in the last week

New Contributors

  • Eric Daniels
  • Mario Idival
  • Ryan Leckey
  • scalexm
  • Tobias Schaffner
  • Tymoteusz Jankowski

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

No RFCs were approved this week.

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now. This week's FCPs are:

New RFCs

Style RFCs

Style RFCs are part of the process for deciding on style guidelines for the Rust community and defaults for Rustfmt. The process is similar to the RFC process, but we try to reach rough consensus on issues (including a final comment period) before progressing to PRs. Just like the RFC process, all users are welcome to comment and submit RFCs. If you want to help decide what Rust code should look like, come get involved!

The RFC style is now the default style in Rustfmt - try it out and let us know what you think!

Currently being discussed:

Upcoming Events

If you are running a Rust event please add it to the calendar to get it mentioned here. Email the Rust Community Team for access.

Rust Jobs

No jobs listed for this week.

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

Nah, it's not you, it's the borrow checker.

Honey, it's not you, it's &mut me.

You can borrow me, and you can change me, but you can't own me.

/u/staticassert, /u/ybx, and /u/paholg on reddit.

Thanks to Matt Ickstadt and QuadDamaged for the suggestion.

Submit your quotes for next week!

This Week in Rust is edited by: nasa42, llogiq, and brson.

Mozilla Marketing Engineering & Ops BlogMozMEAO SRE Status Report - August 8, 2017

Here’s what happened on the MozMEAO SRE team from August 1st - August 8th.

Current work

MDN Migration to AWS

Virginia and EUW cluster decommissioning

Upcoming Portland Deis 1 cluster decommissioning

The Deis 1 cluster in Portland is tentatively scheduled to be decommissioned next week.

Links

Mozilla Marketing Engineering & Ops BlogKuma Report, July 2017

Here’s what happened in July in Kuma, the engine of MDN Web Docs:

  • Shipped the new design to all users
  • Shipped the sample database
  • Shipped tweaks and fixes

Here’s the plan for August:

  • Continue the redesign and interactive examples
  • Update localization of macros
  • Establish maintenance mode in AWS

Done in July

Shipped the New Design to All Users

In June, we revealed the new MDN web docs design to beta testers. In July, Stephanie Hobson and Schalk Neethling fixed many bugs, adjusted styles, shipped the homepage redesign, and answered a lot of feedback. The new design was shipped to all MDN Web Docs users on July 25, and the old design files were retired.

The redesign was a big change, with some interesting problems that called for creative solutions. For details, see Stephanie’s blog post, The MDN Redesign “Behind the Scenes”.

Shipped the Sample Database

The sample database project, started in May 2016, finally shipped in July.

Data is an important part of Kuma development. With the code and backing services you get the home page, and not much else. To develop features or test changes, you often need wiki pages, historical revisions, waffle flags, constance settings, tags, search topics, users and groups. Staff developers could download a 2 GB anonymized production database, wait 30 minutes for it to load, and then they would have a useful dev environment. Contributors had to manually copy data from production, and usually didn’t bother. The sample database has a small but representative data set, suitable for 90% of development tasks, and takes less than a minute to download and install.

The sample database doesn’t have all the data on MDN, to keep it small. There are now scraping tools for adding more production data to your development database. This is especially useful for development and testing of KumaScript macros, which often require specific pages.

Finally, integration testing is challenging because non-trivial testing requires some known data to be present, such as specific pages and editor accounts. Now, a testing deployment can combine new code with the sample database, and automated browser-based tests can verify new and old functionality. Some tests can change the data, and the sample data can be reloaded to a known state for the next test.

Shipped Tweaks and Fixes

There were many PRs merged in July:

Some highlights:

Planned for August

Continue the redesign and the interactive examples

We’ve established the new look-and-feel of MDN on the homepage and article pages, and will continue to tweak the design for corner cases and bugs. For the next phase, we’ll look at the content of article pages, and consider better ways to display information and to navigate within and between pages. It is harder to change these aspects than global headers and footers, so it may be a while before you see the fruits of this design process.

Work continues on the interactive examples. They have gone through several review and bug fix cycles, and have a working production deployment system. There’s been interest and work to enable contributions (Issue 99). In August, we’ll launch user testing, and enable the new examples for beta testers. See the projects page for the remaining work.

Update Localization of Macros

Currently, KumaScript macros use in-macro localization strings and utility functions like getLocalString to localize output for three to five languages. Meanwhile, user interface strings in Kuma are translated in Pontoon into 57 languages. We’d like to use a similar workflow for strings in macros.
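To make that concrete, the in-macro approach boils down to a per-macro string table keyed by locale. Here is a simplified sketch (not an actual MDN macro; the real ones use helpers like getLocalString, and env.locale is the page locale KumaScript exposes to macros):

// Simplified, illustrative sketch of per-macro localization.
var strings = {
  "en-US": "Deprecated",
  "fr":    "Obsolète",
  "ja":    "非推奨"
};
// Pick the string for the page locale, falling back to English.
var label = strings[env.locale] || strings["en-US"];

Every macro that needs localized output carries a table like this, which is part of why adding a new language currently means touching macros one by one.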

In August, we’ll assemble the toolchain for localizing strings at render time, and for extracting the localizable strings for translation in Pontoon. Converting the macros to use localizable strings will be a long process, but there’s a lot of community interest in translations, so we should get some help.

Establish Maintenance Mode in AWS

Over the past 12 months, we’ve made some changes to MDN development, such as switching to a Docker development environment, moving KumaScript macros to GitHub, and getting our browser-based integration tests working. There are benefits to each of these, but they were chosen because they move us closer to our long-term goal of serving MDN from AWS. We’ve slowly filled out our tech tree from our AWS plan:

AWS Plan, July 2017

In August, we plan to prepare a maintenance mode deployment in AWS, and send some production traffic to it. This will allow us to model the resources needed when the production environment is hosted in AWS. It will also keep MDN data available while the production database is transferred, when we finalize the transition.

Hacks.Mozilla.OrgWebVR for All Windows Users

With the release of Firefox 55 on August 8, Mozilla is pleased to make WebVR 1.1 available for all 64-bit Windows users with an Oculus Rift or HTC VIVE headset. Since we first announced this feature two months ago, we’ve seen tremendous growth in the tooling, art content, and applications being produced for WebVR – check out some highlights in this showcase video:

Sketchfab also just announced support for exporting their 3D models into the glTF format and has over 100,000 models available for free download under Creative Commons licensing, so it’s easier to bring high-quality art assets into your WebVR scenes with libraries such as three.js and Babylon.js and know that they will just work.
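For example, pulling one of those downloaded models into a three.js scene only takes a few lines. A minimal sketch, assuming a recent three.js build with the GLTFLoader example script included, and with "model.gltf" standing in for a downloaded asset:

// Minimal sketch: load a glTF model into an existing three.js scene.
// "model.gltf" is a placeholder path for a downloaded Sketchfab asset.
var scene = new THREE.Scene();
var loader = new THREE.GLTFLoader();
loader.load("model.gltf", function (gltf) {
  scene.add(gltf.scene); // add the imported scene graph to the scene
}, undefined, function (err) {
  console.error("glTF load failed", err);
});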

They are also one of the first sites to take advantage of WebVR to make an animated short, and they highlight how the openness of URLs supports link traversal for building awesome in-VR experiences within web content.

The growth in numbers of new users having their first experiences with WebVR content has been phenomenal as well. In the last month, we have seen over 13 million uses of the A-Frame library, started here at Mozilla to make it easier for web developers, designers and people of all backgrounds to create WebVR content.

We can’t wait to see what you will build with WebVR. Please show off what you’re doing by tweeting to @MozillaVR or saying hi in the WebVR Slack.

Stay tuned for an upcoming A-Frame contest announcement with even more opportunities to learn, experiment, and get feedback!

Mic BermanWhat do you want for your life? Knowing oneself

Roman-mosaic-know-thyself
Socrates


There are many wiser than me that have offered knowing yourself as a valuable pursuit that brings great rewards.


Here are a few of my favourite quotes on why to do this:


This is how I invest in knowing myself - I hope it inspires you to create your own practice

  1. I spend time understanding my motivations, values in action or inaction, and my triggers. I leverage my coach to deconstruct situations that were particularly difficult or rewarding, where I’m overwhelmed by emotion and don’t feel I can think rationally - I check in to get crystal clear on what is going on, how I’m feeling, what the trigger(s) were, and how I will be at choice in action going forward.
  2. I challenge myself in areas I want to understand more about myself by reading, going to lectures, and sharing honestly with learned or experienced friends.
  3. I keep a daily journal - particularly around areas in my life I want to change or improve like being on time and creating sufficient time in my day for reflection. I’ve long run my calendar to ‘maximize time and service’ i.e. every available minute is in meetings, working on a project, etc. This is not only un-sustainable for me, it doesn’t leave me any room for the unexpected and more importantly an opportunity to reflect on what may have just happened or prepare for what or who I may be seeing next. This is not fair to me nor to the people I work with.

Mic BermanHow are you taking care of yourself?

The leaders I coach drive themselves and their teams to great achievements, are engaged in what they do, love their work and have passion and compassion in how they work for their teams and customers. They face tough situations - impossible-seeming deadlines or goals, difficult conversations, constant re-balancing of work-life priorities, and crazy business scenarios we’ve never faced before.

Their days can be both energizing and completely draining. And each day they face those choices and predicaments at times with full grace and others with total foolishness.

Along the way I hear and offer the questions: how are you taking care of yourself? How will you rejuvenate? How will you maintain balance? I ask these questions of the leaders I work with so that they can keep driving their goals, over-achieving each day, and showing up for the important people in their lives :)


I focus on three ways to do this myself.

  • Knowing myself - spending time to understand and check in with my values, triggers, and motivations.

  • Doing a daily practice - I’ve created a daily and weekly practice that touches on my mind, body and spirit. This discipline and evolving practice keeps me learning, present and ‘in balance’.

  • Being discerning about my influences - choosing the people, experiences and beauty that influence my life, and what’s important about that today, this week or month or year.

Shing LyuPorting Chrome Extension to Firefox

Edit: Andreas from the Mozilla Add-ons team pointed out a few errors. I’ll keep them here until I can inline them into the post:

  • Do NOT create a new listing for the extension on AMO; upload and replace your legacy extension using the same listing.
  • The user drop is related to https://blog.mozilla.org/addons/2017/06/21/upcoming-changes-usage-statistics/
  • The web-ext run should work without an ID
  • strict_min_version is not mandatory

Three years ago, I wrote FocusBlocker to help me focus on my master’s thesis. It’s basically a website blocker that stops me from checking Facebook every five minutes. But it’s different from other blockers like LeechBlock, which require you to set a fixed schedule. FocusBlocker lets you set a quota, e.g. I can browse Facebook for 10 minutes and then it’s blocked for 50 minutes. So as long as you have remaining quota, you can check Facebook anytime. I’m glad that other people find it useful, and I even got my first donation through AMO because of happy users.

Since this extension serves my needs, I’m not actively maintaining it or adding new features. But I was aware of Firefox’s transition from the legacy Add-on SDK to the WebExtension API. So before the WebExtension API was fully available, I started by migrating it to Chrome’s extension format. But I didn’t get around to migrating it back to Firefox until a user emailed me asking for a WebExtension version. I looked into the statistics: the daily active user count had dropped from ~1000 to ~300. That’s when I rolled up my sleeves and actually migrated it in one day. Here is how I did it and what I’ve learned from the process.

daily_user.png

What needs to be changed

To evaluate the scope of the work, we first need to look at what APIs I used. The FocusBlocker Chrome version uses three main APIs:

  • chrome.tabs: to monitor new tabs opening and actually block existing tabs.
  • chrome.alarms: to set timers for blocking and unblocking.
  • chrome.storage.sync: to store the settings and persist the timer across browser restarts.

It’s nice that these APIs are all supported (at least the parts I used) in Firefox, so I don’t really need to modify any JavaScript code.
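To give a feel for how these pieces fit together, here is a rough sketch of the blocking pattern (illustrative only, not the actual FocusBlocker source; "blocked.html" is a placeholder page bundled with the extension):

// Illustrative sketch, not the actual FocusBlocker code. The same
// callback-style chrome.* calls work in both Chrome and Firefox.
chrome.storage.sync.get({ blockedSites: ["facebook.com"] }, function (items) {
  chrome.tabs.onUpdated.addListener(function (tabId, changeInfo) {
    var url = changeInfo.url;
    if (url && items.blockedSites.some(function (site) {
      return url.indexOf(site) !== -1;
    })) {
      // Redirect the tab to a bundled "blocked" page (placeholder name).
      chrome.tabs.update(tabId, { url: chrome.runtime.getURL("blocked.html") });
    }
  });
});

// Lift the block again once the blocking period is over.
chrome.alarms.create("unblock", { delayInMinutes: 50 });
chrome.alarms.onAlarm.addListener(function (alarm) {
  if (alarm.name === "unblock") {
    // Clear the blocking state stored in chrome.storage.sync here.
  }
});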

I loaded the manifest directly in Firefox’s about:debugging page (you can also consider using the convenient web-ext command line tool), but Firefox rejected it.

about_debugging.png

That’s because Firefox requires you to set a unique id for each extension (you can read more about the id requirement here), and you must set a minimal version of Firefox on which the extension works, like so:

"applications": {
  "gecko": {
    "id": "focusblocker@shing.lyu",
    "strict_min_version": "48.0"
  }
},

There is one more modification needed. In my Chrome extension I used the old options_page setting to set the preferences page. But Firefox only supports the newer options_ui. You can also apply the browser’s system style to your settings page, so the UI looks like part of the Firefox settings. Firefox generalized the name from chrome_style to browser_style. So this is what I needed to add to my manifest.json file (and remove the options_page setting):

"options_ui": {
  "page": "options.html",
  "browser_style": true
},

about_addon.png browser_style.png

That’s all I needed to port the extension from Chrome to Firefox. Super easy! The WebExtension team really did a good job of making the extensions compatible. In case you are curious, you can find the full source code of focusblocker on GitHub.

Publishing the extension on AMO

To publish the extension on addons.mozilla.org, you need to package all the files into a zip archive and upload it. Here are some tips for passing the review more easily.

  • You can’t just upload a WebExtension-API-backed extension to replace your already-listed legacy extension, so please create a new listing.
  • Don’t pack any unnecessary files into the zip; exclude all temporary test files.
  • Remove or comment out all the console.log() calls. It’s not a strict requirement, but it will make the review process much smoother.
  • If you use any third-party library, consider including (i.e. “vendoring”) the file in the zip, or at least upload the source for review.
  • If you’ve uploaded one version and you’d like to make modifications or fixes, you need to bump the version number, no matter how small the change is.

Firefox is planning to completely roll out the new format in version 57 (around November 2017). So if you have a legacy Firefox extension, or a Chrome extension you want to convert, now is the perfect time.

If you want to try out the new FocusBlocker, please head to the install page. You can also find the Chrome version here.

Robert O'CallahanStabilizing The rr Trace Format With Cap’n Proto

In the past we've modified the rr trace format quite frequently, and there has been no backward or forward compatibility. In particular most of the time when people update rr — and definitely when updating between releases — all their existing traces become unreplayable. This is a problem for rr-based services, so over the last few weeks I've been fixing it.

Prior to stabilization I made all the trace format updates that were obviously already desirable. I extended the event counter to 64 bits since a pathological testcase could overflow 2^31 events in less than a day. I simplified the event types to eliminate some unnecessary or redundant events. I switched the compression algorithm from zlib to brotli.

Of course it's not realistic to expect that the trace format is now perfect and won't ever need to be updated again. We need an extensible format so that future versions of rr can add to it and still be able to read older traces. Enter Cap’n Proto! Cap’n Proto lets us write a schema describing types for our trace records and then update that schema over time in constrained ways. Cap’n Proto generates code to read and write records and guarantees that data using older versions of the schema is readable by implementations using newer versions. (It also has guarantees in the other direction, but we're not planning to rely on them.)

This has all landed now, so the next rr release should be the last one to break compatibility with old traces. I say should, because something could still go wrong!

One issue that wasn't obvious to me when I started writing the schema is that rr can't use Cap’n Proto's Text type — because that requires text be valid UTF-8, and most of rr's strings are data like Linux pathnames which are not guaranteed to be valid UTF-8. For those I had to use the Data type instead (an array of bytes).

Another interesting issue involves choosing between signed and unsigned integers. For example a file descriptor can't be negative, but Unix file descriptors are given type int in kernel APIs ... so should the schema declare them signed or not? I made them signed, on the grounds that we can then check while reading traces that the values are non-negative, and when using the file descriptor we don't have to worry about the value overflowing as we coerce it to an int.

I wrote a microbenchmark to evaluate the performance impact of this change. It performs 500K trivial (non-buffered) system calls, producing 1M events (an 'entry' and 'exit' event per system call). My initial Cap’n Proto implementation (using "packed messages") slowed rr recording down from 12 to 14 seconds. After some profiling and small optimizations, it slows rr recording down from 9.5 to 10.5 seconds — most of the optimizations benefited both configurations. I don't think this overhead will have any practical impact: any workload with such a high frequency of non-buffered system calls is already performing very poorly under rr (the non-rr time for this test is only about 20 milliseconds), and if it occurred in practice we'd buffer the relevant system calls.

One surprising datum is that using Cap’n Proto made the event data significantly smaller — from 7.0MB to 5.0MB (both after compression with brotli-5). I do not have an explanation for this.

Another happy side effect of this change is that it's now a bit easier to read rr traces from other languages supported by Cap’n Proto.

Cameron KaiserTenFourFox FPR2 available

As I type in what is not quite the worst hotel room in Mountain View while Rockford Files reruns play in the background, TenFourFox FPR2 final is available for testing (downloads, hashes, release notes). The original plan was not to have a Debug build with this release, but we're still trying to smoke out issue 72, so there is a Debug build as well. Again, it is not intended for general use unless you know what you're doing and why.

The only differences between this and the beta, besides the usual certificate, HPKP and HSTS updates, are some additional debug sections in the widget code for issue 72 and the remaining security and stability update backports. One of these updates fixes a bug in HTTP/2 transactions which helps reduce latency and dropped connections on some sites, notably many Google properties and some CDNs, and affects pretty much any version of Firefox since HTTP/2 support was added. As always, the plan is to go live on Monday PM Pacific.

Day 2 of the Vintage Computer Festival West is tomorrow! Be there, or, um, be not there! And that is clearly worse!

Smokey ArdissonSSL now available on ardisson.org

As of July 8, you can now visit all of ardisson.org,1 including this blog, using an encrypted connection (commonly known as “SSL” or “https”). Hooray!

For the moment I’m not making any effort to force everyone to the https URLs, and some pages (including, sadly, for the moment, any page on this blog that includes a post from before 2017 with images) will throw mixed-content warnings and/or fail to load images in modern browsers because there are images on the page being loaded via plain-old-HTTP—there’s much cleanup still to be done. But I encourage you to update your bookmarks, your feed subscriptions, and whatnot to replace http:// with https:// in order to communicate with ardisson.org in an encrypted, more secure fashion.

Some history

I’ve wanted to do this for years, but it has always been more costly than I could justify. Even as basic SSL certificate prices started to fall (my hosting provider, Bluehost, offered certificates from major Certificate Authorities for a couple of dollars a year), Bluehost only supported SSL certificates on dedicated servers, which ran an additional $10/month or so on top of what I was already paying them for hosting ardisson.org. Bluehost could have supported SSL on shared hosting by implementing SNI on their servers, but for years the company seemed unwilling to do so—presumably because it would cut into their forced-upgrade-to-dedicated-server revenue stream. For a hobbyist website that practically no one ever visits, the costs of a dedicated server (roughly doubling my annual hosting bill) just to implement SSL weren’t worth it.

Finally, though, something moved Bluehost to change; perhaps the arrival and meteoric ascent of Let’s Encrypt,2 which offered free, automatically installed-and-updated SSL certificates (at least with compatible hosting providers), or maybe WordPress’s announcement last December that they were going to stop promoting hosting partners who didn’t offer SSL certificates as part of a default hosting account (Bluehost was, at one point, one of WordPress’s hosting partners; I don’t know if that is still the case). Sometime earlier this year, though—I don’t know exactly when; I never got any notification!—Bluehost announced the availability of free SSL certificates for WordPress sites it hosts, initially using Let’s Encrypt before switching to Comodo.

Some notes on the process at Bluehost

When I discovered that news on July 7, I began investigating what I needed to do (after all, I have WordPress installed and in use). Without having gotten any guidance (or notice of availability), I logged in to my account and went looking for the SSL Certificates page. I initially arrived at that page via the “addons” header link in my account, and at that point the page wasn’t going to request the certificate because it claimed I wasn’t using Bluehost nameservers—which wasn’t true. But I hopped over to the Domain Manager, clicked “save nameserver settings” (what is it about all of these all-lowercase link and button names?) without changing anything there, and in the process was prompted to (re)validate my Whois email address, which I did. I then returned to the SSL Certificates page and tried again, and the certificate request went through. I didn’t time the process, but it seems like it took somewhere between 15 and 30 minutes after the request submission for the certificate to be generated and installed.

Simple—other than jumping through the hoops caused by spurious failures, but at least the failure message provided a clue as to what I should check—and quick (it took far more time for me to draft, and especially finish up, this post!), and thus reasonably painless, and now ardisson.org is, after nearly a decade, finally available in an encrypted fashion. Hooray!

        

1 There are some random old Camino-testing-related subdomains running around; those are not SSL-enabled. Anything anyone would actually want to visit in 2017, however, is available over an encrypted connection. ↩︎
2 Old Camino users may recognize former developer Josh Aas as one of the people behind Let’s Encrypt and its parent, Internet Security Research Group. ↩︎

Wladimir PalantRevisiting permission prompt for Firefox extensions

Almost exactly a year ago I wrote a blog post explaining how permission prompts are a particularly problematic area for a functioning extension ecosystem. While at this point it was already clear that Firefox would show some kind of permission prompt, I hoped that Mozilla would put more thought into it than Chrome did. Unfortunately, this didn’t quite happen. In fact, as I now experienced, the permission prompt in Firefox turned out significantly worse than the one in Chrome.

Two days ago I released a new version of my Google search link fix extension. I finally got around to turning that “run on all websites” permission into a list of specific domains, 193 Google domains in all. And the backlash came immediately, in the form of this review (translated from Russian):

“Google search link fix has been updated. You must approve new permissions before the updated version will install. Choosing “Cancel” you will maintain your current add-on version. It requires your permission to:

  • Access your data for sites in the yandex.com domain
  • Access your data for sites in the yandex.com.tr domain
  • Access your data for sites in the yandex.by domain
  • Access your data in 197 other domains”

Developers, re-read the name of your extension.

This prompt doesn’t show up on the stable Firefox release yet, but Firefox Nightly indeed shows it:

I guess that I must consider myself lucky for having implemented this change so early. A few months later I would have received lots of comments like that, as all users would have seen this prompt. As I explained in my previous blog post, permission prompts on update are particularly disruptive and should be avoided if at all possible. However, Firefox currently displays them even if the extension’s permissions were reduced, as in this case.

The other issue is the way the information is presented. I didn’t expect the order to matter, so I put the Google domains last. But that’s confusing to users if only three domains are displayed, given that Google Search is the primary target of this extension. Worse yet, with no way of listing the remaining domains, users suspect that something malicious is going on.

It seems that the use case “run on various search pages” is common enough that Chrome developers chose to special-case it. The permission prompt displayed by Chrome is way more straightforward:

This also leaves me hoping that Chrome won’t display a permission prompt just because a future update added a new Google domain. It’s still questionable whether I want to add support for more search engines in the future, but it probably won’t confuse users all too much.

As for Firefox, I’m considering re-adding the https://* permission while I can still do it (meaning: most users won’t see the permission prompt on update). Otherwise future updates might turn out quite disastrous.
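For context, the two options differ only in the manifest’s permissions list. Roughly like this (an illustrative sketch, not the extension’s actual manifest): the broad variant requests a single catch-all host permission,

"permissions": ["https://*/*"]

while the per-domain variant spells out every search domain, roughly 200 entries in total:

"permissions": [
  "https://www.google.com/*",
  "https://www.google.de/*",
  "https://yandex.com/*",
  "https://yandex.ru/*"
]

Only the second form produces the long domain listing in the prompt quoted above.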

Mozilla Localization (L10N)Create a localized build locally

Yesterday we changed the way that you create localized builds on mozilla-central.

This works for developers doing regular builds, as well as developers or localizers without a compile environment. Sadly, users of artifact builds are not supported.

For language packs, a mere

./mach build langpack-de

will work. If you’d rather build a localized package, you’ll want to get the package first. If you’re building yourself, that’s

./mach package

and if you want to get a Nightly build from archive.mozilla.org, just

./mach build wget-en-US

If you want to do that for Firefox for Android, you’ll need to specify which platform you want. Set EN_US_BINARY_URL to the latest-mozilla-central-* path for the binary you want to test.

And then you just

./mach build installers-fr

That’ll take care of getting the French l10n repository, and do all the necessary things to get you a nice little installer/package in dist. Pick your favorite language from our repositories. Care for an RTL build? ./mach build installers-fa will get you a Persian one 😉.

As with other repositories we clone into ~/.mozbuild, you’ll want to update those every now and then. They’re in l10n-central/*, a repository for each language you tried.

Documentation is on gecko.rtd, bugs go here. This works for Firefox, Firefox for Android, and Thunderbird.

And now you can safely forget all the things you never wanted to know about localized builds.

Justin DolskePhoton Engineering Newsletter #11

H*ck yeah, it’s time for another Photon newsletter! They now go to #11!

It’s Hip To Be Square

So… Perhaps you noticed something ever so slightly different in yesterday’s Nightly. Something less curvy, and more rectangular. Look closely, right there at the tabs. That’s right. No more curvy tabs!

austphoton

Behold, rectangular tabs! This is one of the last few major Photon features to be implemented.

tabs

We think most people will like the new tab shape. Some people won’t like them. That’s ok. We’ve done a lot of user testing, and have seen a lot of positive feedback on the Photon mockups since they first came out. And, of course, the Firefox Compact Light/Dark née DevTools themes have had square tabs for a long time. So while it’s a big change to a very prominent piece of UI, it’s also a change that’s a bit familiar, and really helps to make Firefox look clean and modern.

(There’s a little bit more change still to come with the tabs – we’re going to make them a little bit taller by default. This is being handled as a separate follow-up fix, because we discovered that this surprisingly breaks some of our automated tests. So while we’re fixing the tests, we wanted to get the bulk of this change landed.)

R.I.P., curvy tabs.

tabeol

Oh, and you may have also noticed – we updated all the navigation toolbar icons to the new Photon style. They’re lighter-weight than the old icons. We had been holding off on landing this until the start of Nightly-57, simply because it wasn’t worth the effort to add extra code to allow both the old and new icon sets to co-exist (since Nightly-56 would need to disable those icons when it became Beta-56). But now that Nightly is on the 57 train, which will be shipping Photon, we don’t need to worry about that.

Recent Changes

Menus/structure:

Animation:

  • Spent a good chunk of time tracking down a really weird layout issue with OSX 10.9 and Photon.
  • Made the overflow arrows point to the left in RTL builds.
  • Fixed a problem (by backing out the offending patch) where the hamburger menu and other arrow panels would fail to open with some Linux window managers.

Preferences:

Visual redesign:

Onboarding:

  • Made the speech bubble of the onboarding icon clickable.
  • Improved focus styling of the buttons in the tour.
    focusring
  • Working on adding illustrations for 57 tour.

Performance:

One More Thing

You see, something’s going to happen. Something wonderful…

We’ve got one more major visual change coming, which a small team has quietly been working on for quite some time. Even within Mozilla, most people haven’t seen it yet. It looks awesome, and I can’t wait for it to land! I think you’re really going to like it.

More soon. 😉