Chris Cooper: RelEng & RelOps Weekly highlights - August 29, 2015

Happy weekend, friends of releng!

It’s been a very release- and ops-heavy week for lots of people on our teams. We were heads down building multiple Firefox releases and dealing with load issues on the Mozilla VCS infrastructure, as well as issues caused by infrastructure configuration changes. We’re also working jointly with IT, as we speak, to perform a number of maintenance tasks during our regular tree closing window.

Modernize infrastructure: Dustin and Anthony are working on TTL support in tooltool.

Q brought up another hand-built windows 10 machine to double the available pool size on try. At the same time Callek continues to coordinate with developers in his efforts to green up Windows 10 testing.

Q updated the Microsoft Deployment Toolkit (MDT) to 2013 update 1. XP booting is still being worked on, but we took a big step forward in being able to support Windows 10.

Rob determined why our AWS Windows spot instances weren’t spinning up and patched a few bugs in our cloud-tool and slavealloc configurations. Now we need to work on making sure builds function properly. One step closer to having Windows builds in AWS.

Improve CI pipeline: Dustin has been working on running linux builds in a docker image running the latest version of CentOS.

Ben has been working on Buildbot Taskcluster Bridge improvements:

  • Implemented payload schema verification
  • Correctly resolve badly formed Tasks
  • Identified a number of key edge cases that still need to be handled

Rail has funsize reporting to production treeherder (https://bugzil.la/725726)

Rail got funsize to generate partial updates 30-50% faster after enabling a local diffing cache.

Release: Jordan was point on handling multiple go-to-build requests and shipping important point releases (40.0.3, 38.2.1).

Operational: Dustin is working on a plan to migrate TreeStatus out of PHX1 and into relengapi in SCL3 as part of the PHX1 exit strategy.

Rail regenerated our git-shared tarball and increased the disk space on our linux64 AWS AMIs in an effort to help with the load issues we’ve been experiencing recently on our internal git servers (https://bugzil.la/1199524).

See you again next week!

Chris Cooper: RelEng & RelOps Weekly highlights - August 23, 2015

Welcome to a double issue of our weekly highlights email, covering the last two action-packed weeks in RelEng and RelOps!

Modernize infrastructure: Rob debugged the spot deployment process and checked in changes to allow cloud tools to allocate Windows spot instances: b-2008-spot (build) & y-2008-spot (try). Testing is ongoing to validate that these instances are capable of performing build work.

Greg and Jonas have been working closely with Treeherder folks and Ed Morley pushed a change that dramatically improves UX for Sheriffs looking at TC jobs.

Before: https://treeherder.allizom.org/logviewer.html#?job_id=10138081&repo=try

After: https://treeherder.allizom.org/logviewer.html#?job_id=10140320&repo=try

Load testing for separating auth into its own service for TaskCluster is complete. See Jonas for details.

Mike rolled out the new indexing to unify the routes between Taskcluster and Buildbot builds (https://bugzil.la/1133074)

Callek continues to make progress on getting Windows 10 tests green (https://bugzil.la/1192842) on our infra. Special thanks for the developer support he has received thus far in getting the various issues addressed. At the same time, Q has been reworking our infra to support Windows 10 in our deployment toolchains (which will enable us to bring up more Windows 10 machines to begin to meet capacity needs).

Improve release pipeline: Ben and Rail made a breakthrough in how to test the new Release Promotion code, which will let us move faster and with more confidence on that project.

The Ship It/release runner development environment is now fully functional, which lets us easily do end to end testing of Release Promotion.

Improve CI pipeline: Callek disabled the jetpack tests running on try that have been broken since addon signing landed. (https://bugzil.la/1192318, https://bugzil.la/1192873)

Kim disabled the Android 4.3 debug robocop tests on try, which were broken and hidden (https://bugzil.la/1194474)

Kim changed the instance type that Android 4.3 mochitests run on to c3.xlarge, which will allow us to run media tests on emulators (https://bugzil.la/1195893)

Release: Released Firefox 40.0, 40.0.2, 41.0b1, and 41.0b2 (we built 40.0.1 but did not release it) for both Desktop and Android. We also built Thunderbird 38.2.

Operational: Amy reimaged the remaining MacOSX 10.8 machines as 10.10 and Kim deployed them to our Yosemite pool which increased the size of this pool by 10% (https://bugzil.la/1173452). Kim also removed the remaining 10.8 configs from our code base (https://bugzil.la/1166311)

Rail fixed the bug (https://bugzil.la/1141416) that was preventing us from reimaging linux32 talos machines, and Alin was able to reimage these machines and add them to our production pool.

Jordan performed the merge-day [uplift-day] config changes (https://bugzil.la/1178507)

Callek removed support for building esr31 now that it is EOL, allowing us to clean up a bunch of now-obsolete complexity. (https://bugzil.la/1191349)

After discussions with the A-team and sheriffs, Kim increased the parameters for SETA so that tests that historically don’t identify regressions will only run on every 7th push or every 60 minutes on m-i and m-c. (https://bugzil.la/1195803). We hope to revisit this in a few weeks to increase it back to every 10th push and every 90 minutes. Thanks to Alice and Armen for writing mozci tools to allow the sheriffs to backfill jobs to avoid the backout issues on merges that SETA revealed when it was first implemented.

See you next week!

George Roter: Why Mozilla (for me)?

It’s official. I’m here at Mozilla for the indefinite future with a title of Head of Core Contributors, Participation. Basically, I’m responsible for enabling a team of volunteers and staff to grow the size and impact of our community of most-committed volunteer Mozillians.

As I considered this role, I asked myself: Why Mozilla? Of all of the places in the world that I can apply my energy and talents, why here? I wanted to share my answer (as of today):

The past 150 years have brought the greatest advances in freedom and opportunity in human history.

It has also brought (a) existential, complex global and local challenges, and (b) a centralizing of power. Centralized power cannot solve, and is often the cause of, these existential challenges.

The web is the single greatest (and maybe only) chance humanity has to address these challenges, because it can decentralize power and unleash the human ingenuity of millions of people.

But the web itself is being centralized and made less open. From locked-down content, to ring-fenced platforms, to the advertising/ economics of the web, to technology stacks. The largest and most powerful organizations and governments in the world are eroding the openness of the web.

Mozilla is probably the world’s best chance to reverse this trend. We are the only organization in the world that is championing a vision of openness on the web, has the scale to achieve it, and as a mission-driven, not-for-profit doesn’t have its purpose corrupted by shareholders and profit motives.

At the same time, this is such a wildly ambitious organizational vision that only a movement of talented people working together — volunteer Mozillians and our allies — has a chance to see this vision become a reality.

What’s truly energizing about my role is that the Mozilla brand, user-base, financial resources and mythology is a platform to build a participation function that can scale to directly enabling millions to take actions aligned with their own passions and beliefs. This can be at the leading edge of what anyone has done before in organizing people globally and locally. And when we are successful, the web will be the platform we need to address humanity’s most pressing challenges.

Finally, to quote a great Canadian, Marshall McLuhan: “the medium is the message”. The pattern of working that Mozilla is pioneering is transformative (or will be with the organizational changes that have been articulated in the vision of radical participation) — open, self-organizing and adaptive, creativity from the edges, distributed leadership and voice, each and every Mozillian accountable to each other and for the whole.

At a meta level, these are key to the broader global social justice changes I believe in. This pattern, and its impact on the millions of deep relationships we can build through participation, may be another part of Mozilla’s enduring legacy and impact.


Air Mozilla: Web QA Weekly Meeting

Web QA Weekly Meeting – This is our weekly gathering of Mozilla’s Web QA team, filled with discussion on our current and future projects, ideas, demos, and fun facts.

Byron Jones: happy bmo push day!

the following changes have been pushed to bugzilla.mozilla.org:

  • [1200600] message about 2fa’s interaction with api authentication should be visible when 2fa is enabled
  • [1200610] jsonrpc API requests do not work when “Require API-Key authentication for API requests” is enabled due to missing API tokens
  • [1200618] Layout of the TOTP 2FA leads to missing the token time window and thus frustration
  • [1200957] when a non-admin edits a user the 2fa setting is incorrectly reported
  • [1200974] “API-Key” in 2fa text should be “API key” to match the rest of bugzilla
  • [1196618] add support for group owners
  • [1196508] Intern Request Metric Dashboard
  • [1200961] switching to the modal view doesn’t force the mozilla skin
  • [1201116] remove the duo mobile client from suggested apps due to its lack of handling of expired codes

discuss these changes on mozilla.tools.bmo.


Filed under: bmo, mozilla

Emma Irwin: Mozilla Thimble – Test Drive of Happiness

In the last few years I’ve learned a ton about what helps people learn, where they get stuck and how to customize learning for various ages, interests and attention-spans. When ‘teaching the web’ to kids as young as eight all the way up to university students, there’s always some level of trouble-shooting and tinkering to do with the tools I’ve tried so far (both on and offline). Mozilla’s Thimble had been one of those tools, but usually for the very early steps in learning. For more advanced lessons, I’ve tried a number of different solutions, all with some level of challenge. For example, I turned more to codepen.io to show the separation of CSS/HTML & JS, which was fun but only for super-short snippet-type learning. I also ventured offline with simple editors like Notepad++, only to run into knowledge-blockers with students – around file systems or computer permissions for installing new software.

And so, I was super-excited to see the latest version of Thimble released this week – especially after I did some testing. Here’s why I’m excited to go back to teaching with the (new) Thimble:

Customizable UI

The new Thimble allows you to expand your code view, or your preview, as you need. It seems small, but it’s a huge change from the previous version. With this, the brightness toggle, and text-size customization, people will be able to settle in to what works best for their learning. And thank you, thank you – the preview screen stays at the exact scroll position across refreshes.

Mobile View

Many kids ask to make ‘apps’ in my classes, when often what they really mean is “something I can make and share on my phone”. So while the mobile view is obviously great from the perspective of learning to design for mobile, it also helps students understand the web as a platform for their app ideas. I imagine there’s more opportunity to extend that idea well beyond this, perhaps with FFOS app preview.

File Management

A billion times better. Students can now upload the files they need versus ‘all code in one page’, or link to external files, which with previous versions often resulted in mixed content errors. Yes, so much awesome, including the ability to rename files AND upload entire directories, which makes it easy (I think) for people to fork and upload projects. I managed to exceed the maximum file size for upload, but at 5MB it seems pretty reasonable. Having file lists also ‘bakes in’ an opportunity to teach file systems, best practices, naming conventions etc. – which in the past was offline only. The only thing I couldn’t figure out was how to download my project. Also next wish: version control integration.

You can also take and upload ‘selfie’ images from your computer, which will be super-popular, especially for ‘photo booth’ type projects. Youth will love it, providing it passes the privacy agreements of students and schools – but then there’s a lesson to be made with this as well.

Suggested Attributes & Files

One of the biggest challenges and frustrations of getting things working – especially with younger kids – is spelling mistakes in file names and attribute names, open tags, poorly nested tags… And so I’m thrilled to see suggestions & auto-complete as part of the new Thimble. Also, showing which line has errors (without overwhelming popups) will be a huge help. I think there is also a way to use a color wheel to add in hex colors (also helpful for younger learners), but I didn’t have a chance to test that.

Tutorial

Adding a tutorial.html file adds a ‘Tutorial’ view pane. I usually write my lessons in Google Docs, print them, and then give them to students who are still learning to type, and so spend a lot of time looking from one to the other. It’s a huge win that instructors can write tutorials as part of the lesson, and that students can keep their eyes on the screen instead of bothering with a second set of instructions. The only improvement I could ask for would be the ability to assign specific tutorials to files, to create true lesson plans versus one long file (also more value for sharing).

I’m sure there are a bunch of things I missed, but these are the wins for my classes.

Congratulations, and thank you to the Webmaker team, this is going to make things so so so much easier, and more rewarding for students and teachers.

On a separate note – I can’t help but think this would also help some of the curriculum development I’m working on – asking teams to develop content in Markdown. I see there is a Markdown extension for Brackets, and wonder if Thimble could take on a new file type, ‘markdown’, to help educators submit curriculum without coding knowledge. Perhaps that’s the potential of the tutorial file (and collaboration between educators and technologists).

 

Mozilla WebDev Community: Node.js static file build steps in Python Heroku apps

I write a lot of webapps. I like to use Python for the backend, but most
frontend tools are written in Node.js. LESS gives me nicer style sheets, Babel
lets me write next-generation JavaScript, and NPM helps manage dependencies
nicely. As a result, most of my projects are polyglots that can be difficult to
deploy.

Modern workflows have already figured this out: Run all the tools. Most
READMEs I’ve written lately tend to look like this:

$ git clone https://github.example.com/foo/bar.git
$ cd bar
$ pip install -r requirements.txt
$ npm install
$ gulp static-assets
$ python ./manage.py runserver

I like to deploy my projects using Heroku. They take care of the messy details
about deployment, but they don’t seem to support multi-language projects easily.
There are Python and Node buildpacks, but no clear way of combining the two.

Multi Buildpack

GitHub is littered with attempts to fix this by building new buildpacks.
The problem is they invariably fall out of compatibility with Heroku. I could
probably fix them, but then I’d have to maintain them. I use Heroku to avoid
maintaining infrastructure; custom buildpacks are one step forward, but two
steps back.

Enter Multi Buildpack, which runs multiple buildpacks at once.

It is simple enough that it is unlikely to fall out of compatibility. Heroku has a
fork of the project on their GitHub account, which implies that it will be
maintained in the future.

To configure the buildpack, first tell Heroku you want to use it:

$ heroku buildpacks:set https://github.com/heroku/heroku-buildpack-multi.git

Next, add a .buildpacks file to your project that lists the buildpacks to run:

https://github.com/heroku/heroku-buildpack-nodejs.git
https://github.com/heroku/heroku-buildpack-python.git

Buildpacks are executed in the order they’re listed in, allowing later
buildpacks to use the tools and scripts installed by earlier buildpacks.

The Problem With Python

There’s one problem: The Python buildpack moves files around, which makes it
incompatible with the way the Node buildpack installs commands. This means that
any asset compilation or minification done as a step of the Python buildpack
that depends on Node will fail.

The Python buildpack automatically detects a Django project and runs
./manage.py collectstatic. But the Node environment isn’t available, so this
fails. No static files get built.

There is a solution: bin/post_compile! If present in your repository, this
script will be run at the end of the build process. Because it runs outside of
the Python buildpack, commands installed by the Node buildpack are available and
will work correctly.

This trick works with any Python webapp, but let’s use a Django project as an
example. I often use Django Pipeline for static asset compilation. Assets
are compiled using the command ./manage.py collectstatic, which, when properly
configured, will call all the Node commands.

#!/bin/bash
# Put the Node buildpack's binaries on PATH, then build the static assets.
export PATH=/app/.heroku/node/bin:$PATH
./manage.py collectstatic --noinput

Alternatively, you could call Node tools like Gulp or Webpack directly.
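
For example, a bin/post_compile that calls a Node tool directly instead of going through Django might look like the following sketch (the static-assets task name simply mirrors the README example above, and it assumes gulp is declared in package.json so the Node buildpack installs it into node_modules):

#!/bin/bash
# Make the binaries installed by the Node buildpack available.
export PATH=/app/.heroku/node/bin:$PATH
# Run the locally-installed gulp task that builds the static assets.
./node_modules/.bin/gulp static-assets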

In the case of Django Pipeline, it is also useful to disable the Python
buildpack from running collectstatic, since it will fail anyways. This is done
using an environment variable:

heroku config:set DISABLE_COLLECTSTATIC=1

Okay, so there is a little hack here. We still had to prepend the Node binary
folder to PATH. Pretend you didn’t see that! Or don’t, because you’ll need
to do it in your script too.

That’s it

To recap, this approach:

  1. Only uses buildpacks available from Heroku
  2. Supports any sort of Python and/or Node build steps
  3. Doesn’t require vendoring or pre-compiling any static assets

Woot!

Mozilla Reps Community: ReMoCamp and the Leadership Summit

As part of the plan for Global Gatherings the Participation team announced a new event called the “Leadership Summit”. We’re excited to have ReMoCamp be integrated at this event and to bring together a bigger set of leaders. Our initial plan was to have this event happen this year, just as we have done in the past with ReMoCamp, but we decided to move it to January to have more time to prepare a fantastic event.

The summit doesn’t stand alone, it is one part of broader initiative with MozFest and All Hands, with each gathering playing different roles in empowering communities. Learning more about your goals at Mozilla can help you understand which of these events is the best one for you.

The summit will be an opportunity for local and regional communities to work alongside Reps, and for all of us to define what leadership means in the context of Mozilla. We are fully aware that the words “leadership” and “leader” spark some discussion in the context of Mozilla. And we are actively working toward a framework for investing in leadership within our communities. We need all of our current leaders to help us define and shape a leadership culture that is true to our values and will unleash the potential of our communities.

Historically Reps has been a platform for Mozillians to be empowered, and in many cases Reps empowered their communities. It evolved into a broader leadership platform for communities, and one of our more organized ones in Mozilla. We see everyday how the magic of great community leadership ignites people all around the world to join our cause. We are more than ever committed to supporting and elevating our leaders and Reps is an important program for doing so.

Because Mozilla values leadership that can come from all the edges and we believe that empowering individuals and communities is our secret weapon, we want to make sure that local communities and Reps are connected, that anyone, on any edge, can be empowered. Our goal is that the leadership summit will catalyze the energy from Reps and other leaders and ensure we are all working as a powerful team.

Apart from bringing a different (and a bigger!) group of leaders together, we also believe that thinking about leadership and impact in the bigger picture that includes local communities will ultimately help the Reps program evolve to serve our communities much better. We will get “outside of the Reps box” and think holistically about how to organize ourselves to have more impact. And of course we need Reps mentors, but also new Reps and local leaders, to be part of these conversations. And as you know, we’ll be doing this in Singapore, a location close to where our most thriving communities are right now.

We hope to learn a lot from this first event to be able to bring that experience and learning to many, if not all, volunteer leaders. And as with all we do: the leadership summit is an initiative that is open and that you can help shape. If you feel strongly about leadership, about regional or local communities, about empowering others or if you have ideas on what we need, just drop me or the participation team a line. We are eager to hear your thoughts and understand how to make this leadership summit the catalyzer we all want to see.

John O'Duinn: The USENIX Release Engineering Summit, Nov 2015

The USENIX Release Engineering Summit 2015 (“URES15″) is quickly approaching – this time it will be held in November in Washington DC along with LISA2015. To register to attend URES15, click here or on the logo.


Given how great the two (!) URES conferences were last year, I expect URES15 to be fun and informative and very down-to-earth practical all at the same time. One of the great things about these RelEng conferences is that you get to hear what did, and did not, work for others working the same front lines – all in a factual, constructive way. Sharing important ideas helps raise the bar for everyone, so every one of these feels priceless. It is also a great way to meet many other seasoned people in this niche part of the computer business.

If you are a Release Engineer, Site Reliability Engineer, work in Production Operations, deal with DevOps culture, or are someone who keeps wanting to find a better way to reliably ship better quality code, you should attend! You’ll be glad you did!

Note: If you have a project that you worked on which others would find informative, you can submit a proposal here. The deadline for proposals is Friday 04sep2015.

See you there!

Mike Ratcliffe: Installing VPN from the Linux command line

Whether it is to stop people snooping on what we are doing on the internet, or because restrictive governments prevent people from accessing popular websites, you can use a VPN to protect your privacy and allow you to access the internet without restrictions.

Most VPN providers provide simple step-by-step instructions for setting up a VPN on most operating systems, but many fail to explain how to autostart a VPN in a non-graphical environment.

The following will work with the files provided by most VPN providers with a couple of small changes:

Get the Files from your Provider

Of course, the url that your provider uses may be different:

$ cd /tmp
$ wget https://www.privateinternetaccess.com/openvpn/openvpn.zip
$ unzip openvpn.zip

Copy Certificates to /etc/openvpn

$ cp ca.crt /etc/openvpn/
$ cp crl.pem /etc/openvpn/

Enable Autologin

$ vi /etc/openvpn/login.conf

The file should contain the following:

your-privateinternetaccess-username
your-privateinternetaccess-password

$ chmod 400 /etc/openvpn/login.conf

Point OpenVPN Towards the Right Files

The .ovpn files may not be named by country, so substitute the file names below as appropriate.

$ vi Country.ovpn

Prefix the pem and crt paths with /etc/openvpn/

Change the auth-user-pass line to read auth-user-pass /etc/openvpn/login.conf
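
After those edits, the relevant lines of Country.ovpn should look something like this (illustrative only; the exact directives and certificate file names vary by provider, but these match the files copied above):

ca /etc/openvpn/ca.crt
crl-verify /etc/openvpn/crl.pem
auth-user-pass /etc/openvpn/login.conf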

$ cp Country.ovpn /etc/openvpn/Country.conf

Autostart the VPN

"Country" is the name of your .conf file without an extension, so in my case Country.conf (see the previous steps).

$ vi /etc/default/openvpn

Add AUTOSTART="Country"

Check that Autostart is Working

$ reboot

To check if the VPN is running, open lynx and go to google.com. It should take you to the Google site for the specific country you are connected through.

That is it, you are done!

Mozilla Addons Blog: Add-ons Update – Week of 2015/09/02

I post these updates every 3 weeks to inform add-on developers about the status of the review queues, add-on compatibility, and other happenings in the add-ons world.

The Review Queues

  • Most nominations for full review are taking less than 10 weeks to review.
  • 110 nominations in the queue awaiting review.
  • Most updates are being reviewed within 6 weeks.
  • 69 updates in the queue awaiting review.
  • Most preliminary reviews are being reviewed within 10 weeks.
  • 149 preliminary review submissions in the queue awaiting review.

The unlisted queues aren’t mentioned here, but they are empty for the most part (there are actually a couple hundred add-ons awaiting review there but they are awaiting a bulk-review tool that is being worked on, since they belong to a couple of large sets of almost identical add-ons). We’re in the process of getting more help to reduce queue length and waiting times for all queues.

If you’re an add-on developer and would like to see add-ons reviewed faster, please consider joining us. Add-on reviewers get invited to Mozilla events and earn cool gear with their work. Visit our wiki page for more information.

Firefox 41 Compatibility

The compatibility blog post has been up for a while. The compatibility bump should be run soon.

Firefox 42 Compatibility

Expect the blog post to come up sometime next week.

As always, we recommend that you test your add-ons on Beta and Firefox Developer Edition (formerly known as Aurora) to make sure that they continue to work correctly. End users can install the Add-on Compatibility Reporter to identify and report any add-ons that aren’t working anymore.

Add-ons Forum

As we announced before, there’s a new add-ons community forum for all topics related to AMO or add-ons in general. The old forum is now gone, and just redirects to the new one.

Extension Signing

The wiki page on Extension Signing has information about the timeline, as well as responses to some frequently asked questions. The new add-on installation UI and signature warnings are now enabled in release versions of Firefox.

Electrolysis

Electrolysis, also known as e10s, is the next major compatibility change coming to Firefox. In a nutshell, Firefox will run on multiple processes now, running content code in a different process than browser code. This should improve responsiveness and overall stability, but it also means many add-ons will need to be updated to support this.

If you read Kev’s post on the future of add-on development, you should know there are big changes coming. We’re investing heavily in the new WebExtensions API, so we strongly recommend that you start looking into it for your add-ons. If you have requests for new APIs, please suggest them in the UserVoice forum.

Air Mozilla: Quality Team (QA) Public Meeting

Quality Team (QA) Public Meeting – This is the meeting where all the Mozilla quality teams meet, swap ideas, exchange notes on what is upcoming, and strategize around community building and...

Air Mozilla: Product Coordination Meeting

Product Coordination Meeting – Duration: 10 minutes. This is a weekly status meeting, every Wednesday, that helps coordinate the shipping of our products (across 4 release channels) in order...

Smokey Ardisson: Escaping a “stuck” iOS Numbers iCloud sync

Periodically1 Numbers will get “stuck” trying to sync to iCloud changes I’ve just made on my iPhone. Numbers will claim that the spreadsheet is updating, or needs to be updated, but no changes will ever reach iCloud and the “needs update” UI will never go away. This used to be a frequent issue,2 and over time I tried all sorts of things to resolve the problem: quitting Numbers, making another local change, making a change on iCloud, waiting a while, toggling Airplane mode on and off to “kill” networking, turning iCloud off in the Numbers settings, deleting the spreadsheet (both on the phone and also on iCloud), and other things I can no longer remember. Some combination of tinkering always seemed to fix things, but it was never anything easy—always a tangled combination of things—and The Internet™ didn’t seem to have any relevant information when I searched.

Eventually I discovered a reliable, repeatable, simple solution: turn the iPhone off and back on again,3 and the next time I open Numbers, it will happily finish syncing my changes from the iPhone to iCloud.

Hopefully the All-Seeing Internet Search Engines™ will index this post and offer it up to others who might also be searching for a solution.

Has anyone out there already filed a Radar on the “stuck” iCloud sync process? I’ll eventually :-( get around to filing it myself.

        

1 It’s difficult to tell what might be triggering the “stuck” sync, but my best guess based on when and where it happens to me is a marginal network connection or network transition. ↩︎

2 For a while I thought iOS 8.4 might have fixed the problem, but I’ve recently experienced it again. ↩︎

3 I’ve always quit Numbers before powering off, and the “stuck” sync no longer occurs often enough that I can remember to test rebooting without quitting Numbers first. ↩︎

Karl Dubost: WebCompat.com Just got Modern Cathode Ray Tube Screenshots

When reporting bugs for Web Compatibility, it is useful to see what the person is reporting. Often an image helps right away to identify what is wrong in the page. "Something is wrong in the page" can sometimes be difficult to understand.

Last week Mike Taylor released a feature we have wanted for a long time: uploading a screenshot of the issue.

A Visible Issue

Let's go through an example. This site clearly exhibits issues in the banner: the size of the blue top button, and the menu, which is gone from the main viewport.

Two renderings in different engines

When reporting the issue about this site on webcompat.com, we want to upload a screenshot to show it.

Screenshot DOM Nodes

We could upload a full screenshot or just the part which shows the issue. There is a very practical feature in Firefox Developer Tools: Screenshot Node.

Go to the inspector, select the DOM node where the issue is visible, and open the contextual menu. In the list of features, you will discover: Screenshot Node.

Page and Inspector side by side

Once done, you will find the screenshot that has just been taken in the browser's usual download folder.

Issue Screenshot

Upload The Screenshot

We can now head to WebCompat.com to report the issue.

Webcompat form

At the bottom of the form there is an "Attach a screenshot image" option. Once done, our report is fully complete with the screenshot of the node.

And if you are interested in the bug: it is probably a flexbox bug using the old syntax.

Otsukare!

Daniel Stenberg: The TLS trinity dance

In the curl project we currently support eleven different TLS libraries. That is 8 libraries and the OpenSSL “trinity” consisting of BoringSSL, libressl and of course OpenSSL itself.

You could easily be misled into believing that supporting three libraries that all have a common base would be really easy since they have the same API. But no, it isn’t. Sure, they have the same foundation and all three have more in common than they differ, but still, they all diverge in their own little ways and from my stand-point libressl seems to be the one that causes us the least friction going forward.

Let me also stress that I’m but a user of these projects, I don’t participate in their work and I don’t have any insights into their internal doings or greater goals.

libressl

Easy-peasy, very similar to OpenSSL. The biggest obstacle might be that the version numbering is different, so an old program that might be adjusted to different OpenSSL features based on version numbers (like curl was) needs some adjusting. There’s a convenient LIBRESSL_VERSION_NUMBER define to detect libressl with.
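
As an illustration (a sketch, not curl’s actual configure or source logic), code that needs to branch on the library can key off that define:

#include <openssl/opensslv.h>

#if defined(LIBRESSL_VERSION_NUMBER)
  /* libressl: has its own version numbering scheme */
#elif OPENSSL_VERSION_NUMBER >= 0x10002000L
  /* OpenSSL 1.0.2 or newer */
#else
  /* older OpenSSL */
#endif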

OpenSSL

I regularly build curl against OpenSSL from their git master to get an early head-start when they change things and break backwards compatibility. They’ve increased that behavior since Heartbleed and while I generally agree with their ambitions on making more structs opaque instead of exposing all internals, it also hurts us over and over again when they remove things we’ve been using for years. What’s “funny” is that in almost all cases, their response is “well use this way instead” and it has turned out that there’s an equally old API that is still there that we can use instead. It also tells something about their documentation situation when that is such a common pattern. It’s never been possible to grasp this from just reading docs.

BoringSSL

BoringSSL has made great inroads in the market and is used on Android now and more. They don’t do releases(!) and have no version numbers so the only thing we can do is to build from git and there’s no install target in the makefile. There’s no docs for it, they remove APIs from OpenSSL (curl can’t support NTLM nor OCSP stapling when built with it), they’ve changed several data types in the API making it really hard to build curl without warnings. Funnily, they also introduced non-namespaced typedefs prefixed with X509_* that collide with other common headers.

How it can play out in real life

A while ago we noticed BoringSSL had removed the DES_set_odd_parity function which we use in curl. We changed the configure script to look for it and changed the code to survive without it. The lack of that function then also signaled that it wasn’t OpenSSL, it was BoringSSL.

BoringSSL moved around things that caused our configure script to no longer detect it as “OpenSSL compliant” because CRYPTO_lock could no longer be found by configure. We changed it to instead search for HMAC_Init and we were fine again.

Time passed and BoringSSL brought back DES_set_odd_parity, so our configure script no longer saw it as BoringSSL (the Android team fixed this problem in their git but never sent the fix upstream). We changed the configure script accordingly to properly use OPENSSL_IS_BORINGSSL instead to detect BoringSSL, which was the correct thing anyway, and now as a bonus it can thus detect and work with both new and old BoringSSL versions.

A short time after, I again try to build curl against the OpenSSL master branch only to realize they’ve deprecated HMAC_Init that we just recently switched to for detection (since the configure script needs to check for a particular named function within a library to really know that it has detected and can use said library). Sigh, we switched “detect function” again to HMAC_Update. Hopefully this exists in all three and will stick around for a while…

Right now I think we can detect and use all three. It is only a matter of time until one of them will ruin that and we will adapt again.

Monty Montgomery: Comments on the Alliance for Open Media, or, "Oh Man, What a Day"

I assume folks who follow video codecs and digital media have already noticed the brand new Alliance for Open Media jointly announced by Amazon, Cisco, Google, Intel, Microsoft, Mozilla and Netflix. I expect the list of member companies to grow somewhat in the near future.

One thing that's come up several times today: People contacting Xiph to see if we're worried this detracts from the IETF's NETVC codec effort. The way the aomedia.org website reads right now, it might sound as if this is competing development. It's not; it's something quite different and complementary.

Open source codec developers need a place to collaborate on and share patent analysis in a forum protected by attorney-client privilege, something the IETF can't provide. AOMedia is to be that forum. I'm sure some development discussion will happen there, probably quite a bit in fact, but pooled IP review is the reason it exists.

It's also probably accurate to view the Alliance for Open Media (the Rebel Alliance?) as part of an industry pushback against the licensing lunacy made obvious by HEVCAdvance. Dan Rayburn at Streaming Media reports a third HEVC licensing pool is about to surface. To-date, we've not yet seen licensing terms on more than half of the known HEVC patents out there.

In any case, HEVC is becoming rather expensive, and yet increasingly uncertain licensing-wise. Licensing uncertainty gives responsible companies the tummy troubles. Some of the largest companies in the world are seriously considering starting over rather than bet on the mess...

Is this, at long last, what a tipping point feels like?

Oh, and one more thing--

As of today, just after Microsoft announced its membership in the Alliance for Open Media, they also quietly changed the internal development status of Vorbis, Opus, WebM and VP9 to indicate they intend to ship all of the above in the new Windows Edge browser. Cue spooky X-files theme music.

Smokey Ardisson: Quoting John Gruber

John Gruber:

Their old logo was goofy. This new one is simply garbage.

The redeeming quality of the previous Google logos was the whimsy, while the serifs kept them from looking too childlike or immature.

The new logo, however, is simply puerile. Someone needs to take away the Play-Doh.

About:Community: Participation Lab Notes: Short Simple Tasks Increase Engagement

The Mozilla Participation Lab is an initiative that focuses on experimenting and surfacing new approaches to Participation. Read more about the Lab on our wiki page.

Every year, about 7 million people come to the contribute page on Mozilla.org looking for information about how to get involved with Mozilla. These visitors represent an exciting opportunity to increase the number of long-term relationships Mozilla has with people who are passionate about the open web.

Right now, however, only about 0.76% of those people ever register to contribute. Even once they’re registered, how we connect them to their contribution area of interest is far from optimized.

In partnership with the Mozilla.org team, the Participation Team has set out on a medium-term project to try and maximize the potential of this page. Our aim is to make it a better tool for potential contributors, Mozilla, and the mission.

This first experiment set out to figure out how to increase engagement with tasks presented to a visitor after they said they were interested in contributing. We designed and ran an A/B test over a few weeks, presenting visitors with different types of tasks.

tl;dr

  • we were able to increase the engagement rate to 60% (from 15%) in selecting tasks
  • visitors preferred simple tasks to challenging tasks, and shorter tasks to long tasks
  • presenting more options (6) versus fewer options (2) increased the number of people who chose to pursue a task

Keep reading to learn more about the results and the future iterations and experiments we have planned.

Experiment

In this A/B test we created four new versions of the contribute sign-up page. Each page presented viewers with a number of tasks categorized either as simple/challenging (variation a) or as taking a little time/more time (variation b). For each of these variations we displayed either 2 or 6 tasks per page.

Variation B : Simple vs. Challenging

Variation A : Little Time vs. More Time

For the purpose of this first test we did not track engagement past choosing the specific task, and we did not track engagement with individual tasks, measuring only the category of the tasks selected (eg. simple/challenging or little/more time).

Results

The major findings from this first round of testing were that visitors to the contribute page preferred simple tasks to challenging tasks, and shorter tasks to long tasks. We also found that presenting more options (6) increased the number of people who chose to pursue a task instead of selecting the “not ready” button. Finally across all of the page variations we found the percentage of people who engaged with the pages (either by choosing a task or selecting not-ready) was 60%, much higher than the 15% engagement rate of the original page.

Based on all of the results we can infer that the majority of visitors to the contribute page prefer easier, low-barrier tasks, and prefer a handful of choices to just 2.

However there were still a minority of individuals who expressed a preference for more challenging tasks — they may represent skilled contributors who are ready to become deeply engaged in complex projects. In coming iterations of this experiment we will be exploring the level of readiness and skill of these contributors, and whether individuals from this group are more likely to become core contributors than those who selected simpler tasks.

Overall, the enormous engagement rate by both groups further supports the amazing opportunity latent in this page.

Next Steps

Over the coming months we will undertake several rapid iterations on this test. There is a great deal still to be learned, however we believe that through experimenting with participation in venues like this we can unlock huge potential to help people engage with Mozilla in high impact ways.

Over the next few weeks the Participation team will be working to design a second experiment and we will focus on:

  • Enhancing our understanding of the specific kinds of tasks contributors are most interested in
  • How the task completion rate varies among types of tasks and levels of difficulty and what this tells us about the contributors we are able to attract through this page
  • How wording and framing of tasks affects task selection and completion

Other topics we will seek to explore in the longer-term include:

  • Understanding best practices for how to setup second and third tasks to engage contributors in a more long-term way
  • How presenting localized tasks affects completion rate
  • How to identify and present tasks with the maximum appeal to contribute page contributors
  • How to provide “off-shoots” for the non-typical contribute page contributors (such as those who want to contribute code)
  • How to increase the conversion rate from the homepage to the contribute sign-up page
  • If, when and how to collect email sign-ups and what the process is for engaging contributors by email

This project is currently owned by Lucy from the Participation Team in conjunction with the Mozilla.org Team. If you have any questions, ideas for future tests, or if you just want to chat about the future of this page please do not hesitate to get in touch!

Air Mozilla: Webdev Extravaganza: September 2015

Webdev Extravaganza: September 2015 – Once a month web developers across the Mozilla community get together (in person and virtually) to share what cool stuff we've been working on.

The Mozilla Blog: Forging an Alliance for Royalty-Free Video

Things are moving fast for royalty-free video codecs. A month ago, the IETF NETVC Working Group had its first meeting and two weeks ago Cisco announced Thor. Today, we’re taking the next big step in this industry-wide effort with the formation of the Alliance for Open Media. Its founding members represent some of the biggest names in online video, such as Netflix, Amazon, and YouTube, multiple browser vendors including Mozilla, Microsoft, and Google, and key technology providers like Cisco and Intel. The Alliance has come together to share technology and run the kind of patent analysis necessary to build a next-generation royalty-free video codec.

Mozilla has long championed royalty-free codecs. The Web was built on innovation without asking permission, and patent licensing regimes are incompatible with some of the Web’s most successful business models. That’s why we already support great codecs like VP8, VP9, and Opus in Firefox. But the Web doesn’t stand still and neither do we. As resolutions and framerates increase, the need for more advanced codecs with ever-better compression ratios will only grow. We started our own Daala project and formed NETVC to meet those needs, and we’ve seen explosive interest in the result. We believe that Daala, Cisco’s Thor, and Google’s VP10 combine to form an excellent basis for a truly world-class royalty-free codec.

In order to allow us to move quickly, the alliance is structured as a Joint Development Foundation project. These are an ideal complement to a larger, open standards organization like the IETF: One of the biggest challenges in developing open standards in a field like video codecs is figuring out how to review the patents. The Alliance provides a venue for us to share the legal legwork without having to worry about it being used against us down the road. That distributes the load, allows us to innovate faster and cheaper, and gives everyone more confidence that we are really producing a royalty-free codec.

The Alliance will operate under W3C patent rules and release code under an Apache 2.0 license. This means all Alliance participants are waiving royalties both for the codec implementation and for any patents on the codec itself. The initial members are just a start. We invite anyone with an interest in video, online or off, to join us.

For further information please visit www.aomedia.org or view the press release.

Gregory Szorc: Serving Mercurial Clones from a CDN

For the past few months, Mozilla has been serving Mercurial clones from Amazon S3. We upload snapshots (called bundles) of large and/or high-traffic repositories to S3. We have a custom Mercurial extension on the client and server that knows how to exchange the URLs for these snapshots and to transparently use them to bootstrap a clone. The end result is drastically reduced Mercurial server load and faster clone times. The benefits are seriously ridiculous when you operate version control at scale.

Amazon CloudFront is a CDN. You can easily configure it to be backed by an S3 bucket. So we did.

https://hg.cdn.mozilla.net/ is Mozilla's CDN for hosting Mercurial data. Currently it's just bundles to be used for cloning.

As of today, if you install the bundleclone Mercurial extension and hg clone a repository on hg.mozilla.org such as mozilla-central (hg clone https://hg.mozilla.org/mozilla-central), the CDN URLs will be preferred by default. (Previously we preferred S3 URLs that hit servers in Oregon, USA.)
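
Enabling the extension is a one-line hgrc change, something along these lines (the path is illustrative and assumes a local clone of Mozilla's version-control-tools repository, which is where bundleclone lives):

[extensions]
bundleclone = /path/to/version-control-tools/hgext/bundleclone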

This should result in clone time reductions for Mozillians not close to Oregon, USA, as the CloudFront CDN has servers all across the globe and your Mercurial clone should be bootstrapped from the closest and hopefully therefore fastest server to you.

Unfortunately, you do need the aforementioned bundleclone extension installed for this to work. But this should only be temporary: I've proposed integrating this feature into the core of Mercurial so that if a client talks to a server advertising pre-generated bundles, the clone offload just works. I already have tentative buy-in from one Mercurial maintainer. So hopefully I can land this feature in Mercurial 3.6, which will be released November 1. After that, I imagine some high-traffic Mercurial servers (such as Bitbucket) will be very keen to deploy this so CPU load on their servers is drastically reduced.

Air Mozilla: Martes mozilleros

Martes mozilleros – Bi-weekly meeting to talk about the state of Mozilla, the community and its projects.

Byron Jones: happy bmo push day!

the following changes have been pushed to bugzilla.mozilla.org:

  • [1197073] add support for 2fa using totp (eg. google authenticator)
  • [1199136] security bug group moves for new core-security-release group
  • [1199941] inactive sessions should expire faster (a week)

today’s push lands initial support for two-factor authentication on BMO. we currently support time-based one-time passwords (totp) with protection around just logging in. 2fa protection will be extended to protect other actions in the upcoming weeks.

visit the ‘two-factor authentication‘ section under your user preferences to enable 2fa.

discuss these changes on mozilla.tools.bmo.


Filed under: bmo, mozilla

QMO: Firefox 41 Beta 7 Testday, September 4th

I’m writing to let you know that this Friday, September 4th, we’ll be hosting the Firefox 41.0 Beta 7 Testday. The main focus of this event is going to be on Flash with 64-bit Firefox builds and on plug-in testing. Detailed participation instructions are available in this etherpad.

No previous testing experience is required so feel free to join us on the #qa IRC channel and our moderators will make sure you’ve got everything you need to get started.

Hope to see you all on Friday! Let’s make Firefox better together!

Daniel Stenberg: Blog refresh

Dear reader,

If you ever visited my blog in the past and you see this, you should’ve noticed a pretty significant difference in appearance that happened the other day here.

When I kicked off my blog here on the site back in August 2007 and moved my blogging from advogato to self-host, I installed WordPress and I’ve been happy with it since then from a usability stand-point. I crafted a look based on an existing theme and left it at that.

Over time, WordPress has had its hefty amount of security problems over and over again, and I’ve also suffered from them myself a couple of times, and a few times I ended up patching it manually more than once. At one point when I decided to bite the bullet and upgrade to the latest version, it didn’t work to upgrade anymore and I postponed it for later.

Time passed, I tried again without success and then more time passed.

I finally fixed the issues I had with upgrading. With a series of manual fiddling I finally managed to upgrade to the latest WordPress and when doing so my old theme was considered broken/incompatible so I threw that out and started fresh with a new theme. This new one is based on one of the simple default ones WordPress ships for free. I’ve mostly just made it slightly wider and edited the looks somewhat. I don’t need fancy. Hopefully I’ll be able to keep up with WordPress better this time.

Additionally, I added a captcha that now forces users to solve an easy math problem to submit anything to the blog to help me fight spam, and perhaps even more to solve a problem I have with spambots creating new users. I removed over 3300 users yesterday that never posted anything that has been accepted.

Enjoy.  Now back to our regular programming!

Seif Lotfy: Counting flows (Semi-evaluation of CMS, CML and PMC)

Assume we have a stream of events coming in one at a time, and we need to count the frequency of the different types of events in the stream.

In other words: We are receiving fruits one at a time in no given order, and at any given time we need to be able to answer how many of a specific fruit did we receive.

The most naive implementation is a dictionary in the form of <event, count>, and it is the most accurate and suitable for streams with a limited number of event types.

Let us assume a unique item consists of 15 bytes and has a dedicated uint32 (4 bytes) counter assigned to it.

At 10 million unique items we end up using about 190 MB (10,000,000 × 19 bytes), which is a bit much, but on the plus side it's as accurate as it gets.
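
For reference, the naive exact counter in Go (the language the sketch implementations linked below are written in) is just a map keyed by item:

package main

import "fmt"

func main() {
	// One dedicated counter per unique item: exact answers, but memory
	// grows linearly with the number of unique items seen.
	counts := make(map[string]uint32)
	for _, fruit := range []string{"apple", "banana", "apple", "cherry", "apple"} {
		counts[fruit]++
	}
	fmt.Println(counts["apple"]) // 3
}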

But what if we don't have the 190 MB? Or what if we have to keep track of several streams?

Maybe saving to a DB? Well, when querying the DB upon request, we'd run something along the lines of:

SELECT count(event) FROM events WHERE event = ?

The more items we add, the more resource intensive the query becomes.

Thankfully, solutions come in the form of probabilistic data structures (sketches).

I won't get into details but to solve this problem I semi-evaluated the following data structures:

  • Count-Min sketch (CMS) [2]
  • Count-Min-Log sketch (CML) [1][2]
  • Probabilistic Multiplicity Counting sketch (PMC) [1]

Test details:

For each sketch I linearly added new flows with a linearly increasing number of events. So the first flow got 1 event inserted, the second flow got 2 events inserted, all the way up to the 10,000th flow with 10,000 events inserted.

flow 1: 1 event  
flow 2: 2 events  
...
flow 10000: 10000 events  

All three data structures were configured to have a size of 217KB (exactly 1739712 bits).

A couple dozen runs yielded the following results (based on my unoptimized code, especially for PMC and CML):

CMS: 07s for 50005000 insertion (fill rate: 31%)  
CML: 42s for 50005000 insertion (fill rate: 09%)  
PMC: 18s for 50005000 insertion (fill rate: 54%)  

CMS with ɛ: 0.0001, δ: 0.99 (code)

Observe the biased estimation of CMS. CMS will never underestimate. In our case, looking at the top border of the diagram, we can see that there was a lot of overestimation.
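
To make that overestimation mechanism concrete, here is a minimal, unoptimized Count-Min sketch in Go (an illustrative sketch, not the benchmarked code linked above): every insert increments one counter per row, and the estimate is the minimum across rows, so hash collisions can only inflate counts, never reduce them.

package main

import (
	"fmt"
	"hash/fnv"
)

// CountMinSketch is a d x w matrix of counters.
type CountMinSketch struct {
	w, d  uint32
	table [][]uint32
}

func NewCountMinSketch(w, d uint32) *CountMinSketch {
	t := make([][]uint32, d)
	for i := range t {
		t[i] = make([]uint32, w)
	}
	return &CountMinSketch{w: w, d: d, table: t}
}

// index derives a row-specific column from one 64-bit FNV hash via the
// classic (h1 + row*h2) double-hashing trick.
func (s *CountMinSketch) index(item string, row uint32) uint32 {
	h := fnv.New64a()
	h.Write([]byte(item))
	sum := h.Sum64()
	h1, h2 := uint32(sum), uint32(sum>>32)
	return (h1 + row*h2) % s.w
}

// Add increments one counter in every row for the item.
func (s *CountMinSketch) Add(item string) {
	for row := uint32(0); row < s.d; row++ {
		s.table[row][s.index(item, row)]++
	}
}

// Estimate returns the minimum counter across rows; it can only overestimate.
func (s *CountMinSketch) Estimate(item string) uint32 {
	best := ^uint32(0)
	for row := uint32(0); row < s.d; row++ {
		if c := s.table[row][s.index(item, row)]; c < best {
			best = c
		}
	}
	return best
}

func main() {
	cms := NewCountMinSketch(2048, 4)
	for i := 0; i < 42; i++ {
		cms.Add("apple")
	}
	cms.Add("banana")
	fmt.Println(cms.Estimate("apple"))  // at least 42
	fmt.Println(cms.Estimate("banana")) // at least 1
}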

CML with ɛ: 0.000025, δ: 0.99 (16-bit counters) (code)

Just like CMS, CML is also biased and will never underestimate. However unlike CMS the top border of the diagram is less noisy. Yet accuracy seems to be decreasing for the high counting flows.

PMC with (256x32) virtual matrices (code)

Unlike the previous two sketches, this sketch is unbiased, so underestimations exist. Also, the error of the estimated flow count increases with the actual flow count (linearly bigger errors). The drawback here is that PMC gets filled up very quickly, which means at some point it will just have everything overestimated. It is recommended to know the maximum number of different flows beforehand.

Bringing it all together

So what do you think? If you are familiar with these algorithms or can propose a different benchmarking scenario, please comment. I might be able to work on that on a weekend. The code was all written in Go; feel free to suggest optimizations or fix any bugs you find (links above the respective plots).

QMO: An open letter about Mozilla QA

Dear people of the web,

As some of you may already be aware, Mozilla has experienced a lot of change over the years. Most teams and projects within Mozilla have felt this change in some way, either directly or indirectly. The QA Team is no exception.

As a microcosm of the Mozilla Project, with people involved in many disparate projects, QA has changed course many times. To many of you, these changes may have passed by unnoticed. Perhaps you noticed something was different about QA but were not able to understand how or why things had changed. Perhaps it was a feeling, that some of us seemed more distant, or that it just felt different.

This may come as a surprise to some, but there is no longer a single, unified QA team at Mozilla. After going through a few re-organizations, we are spread across the organization, embedded with — and reporting to — various product teams.

Those teams have benefited by having a dedicated QA person on staff full time. However, with so few of us to go around, many teams find themselves without any QA. In this state, we’ve lost the distinguished central QA organization that once was and in doing so we’ve lost a central QA voice.

As a result of these changes and a sense of perpetual reorganization we have reached a tipping point. We’ve lost some very talented and passionate people. Change in itself isn’t a bad thing. The loss of cohesion is. It is time to break this pattern, regain our cohesion and regain our focus on the community.

The core group of QA community members, paid and volunteer, will soon be getting together to formulate a mission statement. We’ll do this with a series of one-on-one conversations between core individuals who are interested in architecting a new QA community.  This will serve as the guiding light of our journey toward a more optimistic future together.

In recognition of those who might feel excluded from this process, we want to assure you that there will be opportunity to contribute very early on. Conducting these one on ones is just the  first step in a very long journey. We plan to bring everyone along who wants to be here, but this process requires great care and it will take time. If you’d like to help us build the future please get in touch with us.

Please read our wiki page to find out more about what we’re doing and where we’re going.

 

Sincerely,

Anthony Hughes and Matt Brandt

Matt Thompson: What we’re working on

What we got done:

What we’re doing next:

Next steps on the Mozilla Learning plan

Stuff we’re working on for the Sep 8 community call:

  • Leadership: focusing on a) audience tiers / ladder of engagement, and b) an analysis of our current offerings.
  • Advocacy: fleshing out our potential core activities / tactics. Plus mapping user journeys against them.
  • Impact: synthesizing the feedback we got on impact statements and milestones from the last community call.

Oh Hey — we’re launching the new Thimble!

https://thimble.mozilla.org

Manish GoregaokarDesigning a GC in Rust

For a while I’ve been working on a garbage collector for Rust with Michael Layzell. I thought this would be a good time to talk of our design and progress so far.

Motivation

“Wait”, you ask, “why does Rust need a garbage collector”? Rust is supposed to work without a GC, that’s one of its main selling points!

True. Rust does work pretty well without a GC. It’s managed to do without one so far, and we still have all sorts of well-written crates out there (none of which use a GC).

But Rust is not just about low-cost memory safety. It’s also about choosing your costs and guarantees. Box<T> and stack allocation are not always sufficient, sometimes one needs to reach for something like Rc<T> (reference counting). But even Rc is not perfect; it can’t handle cycles between pointers. There are solutions to that issue like using Weak<T>, but that only works in limited cases (when you know what the points-to graph looks like at compile time), and isn’t very ergonomic.

Cases where one needs to maintain a complicated, dynamic graph are where a GC becomes useful. Similarly, if one is writing an interpreter for a GCd language, having a GC in Rust would simplify things a lot.

Not to say that one should pervasively use a GC in Rust. Similar to Rc<T>, it’s best to use regular ownership-based memory management as much as possible, and sprinkle Rc/Gc in places where your code needs it.

Previous designs

This isn’t the first GC in Rust. Automatic memory management has existed before in various forms, but all were limited.

Besides the ones listed below, Nick Fitzgerald’s cycle collector based on this paper exists and is something that you should look into if you’re interested. There’s also an RFC by Peter Liniker which sketches out a design for an immutable GC.

Core Rust GC(s)

Rust itself had a garbage collector until a bit more than a year ago. These “managed pointers” (@T) were part of the language. They were removed later with a plan to make GC a library feature.

I believe these were basically reference counted (cycle collected?) pointers with some language integration, but I’m not sure.

Nowadays, the only forms of automatic memory management in Rust are Rc and Arc, which are non-atomic and atomic reference counted pointers respectively. In other words, they keep track of the number of shared references via a reference count (incremented when it is cloned, decremented when destructors run). If the reference count reaches zero, the contents are cleaned up.

This is a pretty useful abstraction, however, as mentioned above, it doesn’t let you create cycles without leaking them.

Spidermonkey

You can read more about Servo’s Spidermonkey bindings in this blog post (somewhat outdated, but still relevant)

In Servo we use bindings to the Spidermonkey Javascript engine. Since Javascript is a garbage collected language, the Rust representations of Javascript objects are also garbage collected.

Of course, this sort of GC isn’t really useful for generic use since it comes bundled with a JS runtime. However, the Rust side of the GC is of a design that could be used in an independent library.

The Rust side of the Spidermonkey GC is done through a bunch of smart pointers, and a trait called JSTraceable. JSTraceable is a trait which can “trace” recursively down some data, finding and marking all GC-managed objects inside it. This is autoderived using Rust’s plugin infrastructure, so a simple #[jstraceable] annotation will generate trace hooks for the struct it is on.

Now, we have various smart pointers. The first is JS<T>. This is opaque, but can be held by other GC-managed structs. To use this on the stack, this must be explicitly rooted, via .root(). This produces a Root<T>, which can be dereferenced to get the inner object. When the Root is created, the contained object is listed in a collection of “roots” in a global. A root indicates that the value is being used on the stack somewhere, and the GC starts tracing usage from these roots. When the Root<T> is destroyed, the root is removed.

The problem with this is that JS<T> doesn’t work on the stack. There is no way for the GC to know that we are holding on to JS<T> on the stack. So, if I copy a JS<T> to the stack, remove all references to it from objects in the GC heap, and trigger a collection, the JS<T> will still be around on the stack after collection since the GC can’t trace to it. If I attempt to root it, I may get a panic or a segfault depending on the implementation.

To protect against this, we have a bunch of lints. The relevant one here protects against JS<T> being carried around on the stack; but like most lints, it’s not perfect.

To summarize: Spidermonkey gives us a good GC. However using it for a generic Rust program is ill advised. Additionally, Servo’s wrappers around the GC are cheap, but need lints for safety. While it would probably be possible to write safer wrappers for general usage, it’s pretty impractical to carry around a JS runtime when you don’t need one.

However, Spidermonkey’s GC did inspire me to think more into the matter.

Brainstorming a design

For quite a while I’d had various ideas about GCs. Most were simplifications of Servo’s wrappers (there’s some complexity brought in there by Spidermonkey that’s not necessary for a general GC). Most were tracing/rooting with mark-and-sweep collection. All of them used lints. Being rather busy, I didn’t really work on it past that, but planned to work on it if I could find someone to work with.

One day, Michael pinged me on IRC and asked me about GCs. Lots of people knew that I was interested in writing a GC for Rust, and one of them directed him to me when he expressed a similar interest.

So we started discussing GCs. We settled on a tracing mark-and-sweep GC. In other words, the GC runs regular “sweeps” where it first “traces” the usage of all objects and marks them and their children as used, and then sweeps up all unused objects.

This model on its own has a flaw. It doesn’t know about GC pointers held on the stack as local variables (“stack roots”). There are multiple methods for solving this. We’ve already seen one above in the Spidermonkey design – maintain two types of pointers (one for the stack, one for the heap), and try very hard using static analysis to ensure that they don’t cross over.

A common model (used by GCs like Boehm, called “conservative GCs”) is to do something called “stack scanning”. In such a system, the GC goes down the stack looking for things which may perhaps be GC pointers. Generally the GC allocates objects in known regions of the memory, so a GC pointer is any value on the stack which belongs to one of these regions.

Of course, this makes garbage collection rather inefficient, and will miss cases like Box<Gc<T>> where the GCd pointer is accessible, but through a non-GC pointer.

We decided rather early on that we didn’t want a GC based on lints or stack scanning. Both are rather suboptimal solutions in my opinion, and very hard to make sound1. We were also hoping that Rust’s type system and ownership semantics could help us in designing a good, safe, API.

So, we needed a way to keep track of roots, and we needed a way to trace objects.

Tracing

The latter part was easy. We wrote a compiler plugin (well, we stole Servo’s tracing plugin which I’d written earlier) which autoderives an implementation of the Trace trait on any given struct or enum, using the same internal infrastructure that #[derive(PartialEq)] and the rest use. So, with just the following code, it’s easy to make a struct or enum gc-friendly:

#[derive(Trace)]
struct Foo {
    x: u8,
    y: Bar,
}

#[derive(Trace)]
enum Bar {
    Baz(u8), Quux
}

For a foo of type Foo, foo.trace() will expand to a call of foo.x.trace() and foo.y.trace(). bar.trace() will check which variant it is and call trace() on the u8 inside if it’s a Baz. For most structs this turns out to be a no-op and is often optimized away by inlining, but if a struct contains a Gc<T>, the special implementation of Trace for Gc<T> will “mark” the traceability of the Gc<T>. Types without Trace implemented cannot be used in types implementing Trace or in a Gc, which is enforced with a T: Trace bound on Gc<T>.
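To make the expansion concrete, here is a hand-written sketch of roughly what those derived implementations amount to. This is not the actual macro output, and it uses a simplified Trace trait with only a trace() method (the real rust-gc trait has more to it), but the shape is the same: each implementation just forwards to its fields.

// Simplified, hand-written equivalent of what #[derive(Trace)] produces
// for the types above. Not the real macro output; just the general shape.
trait Trace {
    fn trace(&self);
}

// Primitives contain no Gc pointers, so tracing them is a no-op.
impl Trace for u8 {
    fn trace(&self) {}
}

struct Foo {
    x: u8,
    y: Bar,
}

enum Bar {
    Baz(u8),
    Quux,
}

impl Trace for Foo {
    fn trace(&self) {
        self.x.trace(); // no-op for a plain u8
        self.y.trace(); // recurse into the enum
    }
}

impl Trace for Bar {
    fn trace(&self) {
        match *self {
            Bar::Baz(ref n) => n.trace(),
            Bar::Quux => {}
        }
    }
}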

So, we have a way of walking the fields of a given object and finding inner Gc<T>s. Splendid. This lets us write the mark&sweep phase easily: Take the list of known reachable Gc<T>s, walk their contents until you find more Gc<T>s (marking all you find), and clean up any which aren’t reachable.

Rooting

Of course, now we have to solve the problem of keeping track of the known reachable Gc<T>s, i.e. the roots. This is a hard problem to solve without language support, and I hope that eventually we might be able to get the language hooks necessary to solve it. LLVM has support for tracking GC things on the stack, and some day we may be able to leverage that in Rust.

As noted above, Spidermonkey’s solution was to have non-rooted (non-dereferencable) heap pointers, which can be explicitly converted to rooted pointers and then read.

We went the other way. All Gc<T> pointers, when created, are considered “rooted”. The instance of Gc<T> has a “rooted” bit set to true, and the underlying shared box (GcBox, though this is not a public interface) has its “root count” set to one.

When this Gc<T> is cloned, an identical Gc<T> (with rooted bit set to true) is returned, and the underlying root count is incremented. Cloning a Gc does not perform a deep copy.

let a = Gc::new(20); // a.root = true, (*a.ptr).roots = 1, (*a.ptr).data = 20

// ptr points to the underlying box, which contains the data as well as
// GC metadata like the root count. `Gc::new()` will allocate this box

let b = a.clone(); // b.root = true, (*a.ptr).roots++, b.ptr = a.ptr

This is rather similar to how Rc works; however, Rc has no root field, and its counter is called a “reference count” instead.

For regular local sharing, it is recommended to just use a borrowed reference to the inner variable (borrowing works fine with rust-gc!) since there is no cost to creating this reference.

When a GC thing is put inside another GC thing, the first thing no longer can remain a root. This is handled by “unrooting” the first GC thing:

struct Foo {
    bar: u32,
    baz: Gc<u32>,
}

let a = Gc::new(20); // why anyone would want to GC an integer I'll never know
                     // but I'll stick with this example since it's simple

let b = Gc::new(Foo {bar: 1, baz: a});
// a.root = false, (*a.ptr).roots--
// b initialized similar to previous example

// `a` was moved into `b`, so now `a` cannot be accessed directly here
// other than through `b`, and `a` is no longer a root.
// To avoid moving a, passing `a.clone()` to `b` will work

Of course, we need a way to traverse the object passed to the Gc<T>, in this case Foo, and look for any contained Gc<T>s to unroot. Sound familiar? This needs the same mechanism that trace() needed! We add struct-walking root() and unroot() methods to the Trace trait which are auto-derived exactly the same way, and continue. (We don’t need root() right now, but we will need it later on).
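A rough sketch of what the extended trait might look like after that addition (names follow the description above; the actual rust-gc trait differs in its details and safety requirements):

// Hypothetical, simplified version of the extended tracing trait. The
// derived impls just forward each method to every field; Gc<T> itself
// provides the interesting implementations.
trait Trace {
    fn trace(&self);  // mark every Gc<T> reachable from this value
    fn root(&self);   // increment the root count of every reachable Gc<T>
    fn unroot(&self); // decrement the root count of every reachable Gc<T>
}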

Now, during collection, we can just traverse the list of GcBoxs and use the ones with a nonzero root count as roots for our mark traversal.
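To make that concrete, here is a heavily simplified sketch of the collection phase. This is not the actual rust-gc code (real boxes hold arbitrary Trace data behind raw pointers in an allocator-owned list; here the “children” are just indices into a vector), but it shows the mark-and-sweep shape: mark from every box with a nonzero root count, then sweep whatever was never reached.

use std::cell::Cell;

// Extremely simplified stand-ins for rust-gc internals, only to show the
// shape of the algorithm, not the real data structures.
struct GcBox {
    roots: Cell<usize>,   // how many rooted Gc<T>s point at this box
    marked: Cell<bool>,   // set during the mark phase
    children: Vec<usize>, // boxes reachable from this one (via Trace)
}

fn collect(heap: &[GcBox]) {
    // Mark phase: start from every box with a nonzero root count and
    // recursively mark everything reachable from it.
    fn mark(heap: &[GcBox], idx: usize) {
        if heap[idx].marked.get() {
            return; // already visited; this is what makes cycles safe
        }
        heap[idx].marked.set(true);
        for &child in &heap[idx].children {
            mark(heap, child);
        }
    }

    for (i, b) in heap.iter().enumerate() {
        if b.roots.get() > 0 {
            mark(heap, i);
        }
    }

    // Sweep phase: anything left unmarked is unreachable. A real collector
    // would free those boxes here; this sketch only reports them, then
    // clears the mark bits for the next collection.
    for (i, b) in heap.iter().enumerate() {
        if !b.marked.get() {
            println!("box {} is garbage", i);
        }
        b.marked.set(false);
    }
}

fn main() {
    // Boxes 0 and 1 form a cycle, with box 0 rooted; box 2 is unreachable.
    let heap = vec![
        GcBox { roots: Cell::new(1), marked: Cell::new(false), children: vec![1] },
        GcBox { roots: Cell::new(0), marked: Cell::new(false), children: vec![0] },
        GcBox { roots: Cell::new(0), marked: Cell::new(false), children: vec![] },
    ];
    collect(&heap); // prints "box 2 is garbage"
}

In the real implementation, of course, nothing scans a vector of indices: the derived Trace impls do the walking, and the root counts come from the rooting bookkeeping described above.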

So far, so good. We have a pretty sound design for a GC that works … for immutable data.

Mutability

Like Rc<T>, Gc<T> is by default immutable. Rust abhors aliasable mutability, even in single threaded contexts, and both these smart pointers allow aliasing.

Mutation poses a problem for our GC, beyond the regular problems of aliasable mutability: It’s possible to move rooted things into heap objects and vice versa:

let x = Gc::new(20);

let y = Gc::new(None);

*y = Some(x); // uh oh, x is still considered rooted!

// and the reverse!

let y = Gc::new(Some(Gc::new(20)));

let x = y.take(); // x was never rooted!
// `take()` moves the `Some(Gc<u32>)` out of `y`, replaces it with `None`       

Since Gc<T> doesn’t implement DerefMut, none of this is possible — one cannot mutate the inner data. This is one of the places where Rust’s ownership/mutability system works out awesomely in our favor.

Of course, an immutable GC isn’t very useful. We can’t even create cycles in an immutable GC, so why would anyone need this in the first place2?

So of course, we needed to make it somehow mutable. People using Rc<T> solve this problem by using RefCell<T>, which maintains something similar to the borrow semantics at runtime and is internally mutable. RefCell<T> itself can’t be used by us since it doesn’t guard against the problem illustrated above (and hence won’t implement Trace, but a similar cell type would work).

So we created GcCell<T>. This behaves just like RefCell<T>, except that it will root() before beginning a mutable borrow, and unroot() before ending it (well, only if it itself is not rooted, which is tracked by an internal field similar to Gc<T>). Now, everything is safe:

#[derive(Trace)]
struct Foo {
    a: u8,
    b: GcCell<Gc<u8>>,
}

let x = Gc::new(20);

let y = Gc::new(Foo {a: 10, b: GcCell::new(Gc::new(30))});
{
    *y.b.borrow_mut() = x; // the `Gc(30)` from `y.b` was rooted by this call
                           // but since we don't actually use it here,
                           // the destructor gets rid of it.
                           // We could use swap() to retain access to it.
    // ...
    // x unrooted
}


// and the reverse case works too:

let y = Gc::new(GcCell::new(Some(Gc::new(20))));

let x = y.borrow_mut().take(); // the inner `Some(Gc(20))` gets rooted by `borrow_mut()`
                               // before `x` can access it

So now, mutation works too! We have a working garbage collector!

Open problems

Destructors

I believe this can be solved without lints, but it may require some upcoming features of Rust to be implemented first (like specialization).

In essence, destructors implemented on a value inside Gc<T> can be unsafe. This will only happen if they try to access values within a Gc<T> — if they do, they may come across a box that has already been collected, or they may lengthen the lifetime of a box scheduled to be collected.

The basic solution to this is to use “finalizers” instead of destructors. Finalizers, like in Java, are not guaranteed to run. However, we may need further drop hooks or trait specialization to make an airtight interface for this. I don’t have a concrete design for this yet, though.

Concurrency

Our model mostly just works in a concurrent situation (with thread safety tweaks, of course); in fact it’s possible to make it so that the concurrent GC will not “stop the world” unless someone tries to do a write to a GcCell. We have an experimental concurrent GC in this pull request. We still need to figure out how to make interop between both GCs safe, though we may just end up making them such that an object using one GC cannot be fed to an object using the other.

Performance

So far we haven’t really focused on performance, concentrating instead on ensuring safety. Our collection-triggering algorithm, for example, was horribly inefficient; we had planned on improving it, but the wonderful Huon fixed it for us.

Similarly, we haven’t yet optimized storage. We have some ideas which we may work on later. (If you want to help, contributions welcome!)

Cross-crate deriving

Currently, an object deriving Trace should have Traceable children. This isn’t always possible when members from another crate (which does not depend on rust-gc) are involved. At the moment, we allow an #[unsafe_ignore_trace] annotation on fields which are of this type (which excludes it from being traced – if that crate doesn’t transitively depend on rust-gc, its members cannot contain GC things anyway unless generics are involved). It should be possible to detect whether or not this is safe, and/or autoderive Trace using the opt-in builtin traits framework (needs specialization to work), but at the moment we don’t do anything other than expose that annotation.

Stdlib support for a global Trace trait that everyone derives would be awesome.

Conclusion

Designing a GC was a wonderful experience! I didn’t get to write much code (I was busy and Michael was able to implement most of it overnight because he’s totally awesome), but the long design discussions followed by trying to figure out holes in the GC design in every idle moment of the day were quite enjoyable. GCs are very hard to get right, but it’s very satisfying when you come up with a design that works! I’m also quite happy at how well Rust helped in making a safe interface.

I encourage everyone to try it out and/or find holes in our design. Contributions of all kinds are welcome; we’d especially love performance improvements and testcases.

Discuss: HN, Reddit


  1. I’m very skeptical that it’s possible to make either of these completely sound without writing lints which effectively rewrite a large chunk of the compiler

  2. There is a case to be made for an immutable GC which allows some form of deferred initialization of GC fields, however.

David HumphreyIntroducing a New Thimble and Bramble

Introduction

This week we're shipping something really cool with Mozilla, and I wanted to pause and tell you about what it is, and how it works.

The tldr; is that we took the Mozilla Foundation's existing web code editor, Thimble, and rewrote it to use Bramble, our forked version of the Brackets editor, which runs in modern web browsers. You can try it now at https://thimble.mozilla.org/

If you're the type who prefers animated pictures to words, I made you a bunch over on the wiki, showing what a few of the features look like in action. You can also check out Luke's great intro video.

If you're the type who likes words, the rest of this is for you.

Why?

I started working on this project two years ago. While at MozFest 2013 I wrote about an idea I had for a new concept app that merged Thimble and Brackets; at the time I called it Nimble.

I was interested in merging these two apps for a number of reasons. First, I wanted to eliminate the "ceiling" users had when using Thimble, wherein they would graduate beyond its abilities, and be forced to use other tools. In my view, Thimble should be able to grow and expand along with a learner's abilities, and a teacher's needs.

Second, people were asking for lots of new features in Thimble, and I knew from experience that the best code is code you don't have to write. I wanted to leverage the hard work of an existing community that was already focused on building a great web coding platform. Writing a coding environment is a huge challenge, and our team wasn't equipped to take it on by ourselves. Thankfully the Brackets project had already solved this.

On Brackets

Brackets was an easy codebase to get started on, and the community was encouraging and willing to help us with patches, reviews, and questions (I'm especially thankful for @randyedmunds and @busykai).

Brackets is written in an AMD module system, and uses requirejs, react, CodeMirror, LESS, jQuery, Bootstrap, lodash, acorn, tern, etc. One of the things I've loved most about working with the Brackets source is that it uses so much of the best of the open web. Its ~1.3 million lines of code offer APIs for things like:

  • code hinting, static analysis, and linting
  • language parsing and tokenizing (html, css, js, xml, less)
  • file system operations
  • editors
  • live DOM diff'ing and in-browser preview
  • swappable servers
  • layout, widgets, and dialogs
  • localization, theming, and preferences
  • extension loading at runtime, with hundreds already written

In short, Brackets isn't an editor so much as a rich platform for coding and designing front-end web pages and apps. Brackets' killer feature is its ability to render a live preview of what's in your editor, including dynamic updates as you type, often without needing to save. The preview even has an awareness of changes to linked files (e.g., external stylesheets and scripts).

Another thing I loved was that Brackets wasn't trying to solve code editing in general: they had a very clear mandate that favoured web development, and front-end web development in particular. HTML, CSS, and JavaScript get elevated status in Brackets, and don't have to fight with every other language for features.

All of these philosophies and features melded perfectly with our goal of making a great learning and teaching tool for web programming.

But what about X?

Obviously there are a ton of code editing tools available. If we start with desktop editors, there are a lot to choose from; but they all suffer from the same problem: you have to download 10s of megs of installer, and then you have to install them, along with a web server, in order to preview your work. Consider what's involved in installing each of these (on OS X):

Thimble, on the other hand, is ~1M (877K for Bramble, the rest for the front-end app). We worked extremely hard to get Brackets (38.5M if you install it) down to something that fits in the size of an average web page. If we changed how Brackets loads more significantly, we could get it smaller yet, but we've chosen to keep existing extensions working. The best part is that there is no install: the level of commitment for a user is the URL.

In addition to desktop editors, there are plenty of popular online options, too:

The list goes on. They are all great, and I use and recommend them all. Each of these tools has a particular focus, and none of them does exactly what the new Thimble does; specifically, none of them tries to deal with trees of files and folders. We don't need to do what these other tools do, because they already do it well. Instead, we focused on making it possible for users to create a rich and realistic environment for working with arbitrary web site/app structures without needing to install and run a web server.

From localhost to nohost

I've always been inspired by @jswalden's httpd.js. It was written back before there was node.js, back in a time when it wasn't yet common knowledge that you could do anything in JS. The very first time I saw it I knew that I wanted to find some excuse to make a web server in the browser. With nohost, our in-browser web server, we've done it.

In order to run in a browser, Bramble has to be more than just a code editor; it also has to include a bunch of stuff that would normally be provided by the Brackets Shell (similar to Electron.io) and node.js. This means providing a:

  • web server
  • web browser
  • filesystem

and glue to connect those three. Brackets uses Chrome's remote debugging protocol and node.js to talk between the editor, browser, and server. This works well, but ties it directly to Chrome.

At first I wasn't sure how we'd deal with this. But then an experimental implementation of the Brackets LiveDevelopment code landed, which switched away from using Chrome and the remote dev tools protocol to any browser and a WebSocket. Then, in the middle of the docs, we found an offhand comment that someone could probably rewrite it to use an iframe and postMessage...a fantastic idea! So we did.

Making it possible for an arbitrary web site to work in a browser-based environment is a little like Firefox's Save Page... feature. You can't just deal with the HTML alone--you also have to get all the linked assets.

Consider an example web page:

<!DOCTYPE html>  
<html>  
  <head>
   <meta charset="utf-8">
    <title>Example Page</title>
    <link rel="stylesheet"
          href="styles/style.css">
  </head>
  <body>
    <img src="images/cat.png">
    <script src="script.js"></script>
    <script>
      // Call function f in script.js
      f();
    </script>
  </body>
</html>  

In this basic web page we have three external resources referenced by URL. The browser needs to be able to request styles/style.css, images/cat.png, and script.js in order to fully render this page. And we're not done yet.

The stylesheet might also reference other stylesheets using @import, or might use other images (e.g., background-image: url(...)).

It gets worse. The script might need to XHR a JSON file from the server in order to do whatever f() requires.

Bramble tries hard to deal with these situations through a combination of static and dynamic rewriting of the URLs. Eventually, if/when all browsers ship it, we could do a lot of this with ServiceWorkers. Until then, we made do with what we already have cross browser.

First, Bramble's nohost server recursively rewrites the HTML, and its linked resources, in order to find relative filesystem paths (images/cat.png) and replace them with Blobs and URL objects that point to cached memory resources read out of the browser filesystem.

Parsing HTML with regex is a non-starter. Luckily browsers have a full parser built in: DOMParser. Once we have an in-memory DOM instead of an HTML text string, we can accurately querySelectorAll to find things that might contain URLs (img, link, video, iframe, etc., avoiding <a> due to circular references) and swap those for generated Blob URLs from the filesystem. When we're done, we can extract the rewritten HTML text from our live in-memory DOM via documentElement.outerHTML, obtaining something like this:

<!DOCTYPE html>  
<html>  
  <head>
   <meta charset="utf-8">
    <title>Example Page</title>
    <link rel="stylesheet"
          href="blob:https%3A//mozillathimblelivepreview.net/346526f5-3c14-4073-b667-997324a5bfa9">
  </head>
  <body>
    <img src="blob:https%3A//mozillathimblelivepreview.net/ab090911-9ec1-499c-a9fc-7fce180704f7">
    <script src="blob:https%3A//mozillathimblelivepreview.net/264a3524-5316-47e5-a835-451e78247678"></script>
    <script>
      // Call function f in script.js
      f();
    </script>
  </body>
</html>  

All external resources now use URLs to cached memory resources. This HTML can then be itself turned into a Blob and URL object, and used as the src for our iframe browser (this works everywhere except IE, where you have to document.write the HTML, but can use Blob URLs for everything else).

For CSS we do use regex, looking for url(...) and other places where URLs can lurk. Thankfully there aren't a lot, and it's just a matter of reading the necessary resources from disk, caching to a Blob URL, and replacing the filesystem paths for URLs, before generating a CSS Blob URL that can be used in the HTML.

Despite what everyone tells you about the DOM being slow, the process is really fast. And because we own the filesystem layer, whenever the editor does something like a writeFile(), we can pre-generate a URL for the resource, and maintain a cache of such URLs keyed on filesystem paths for when we need to get them again in the future during a rewrite step. Using this cache we are able to live refresh the browser quite often without causing any noticeable slowdown on the main thread.

As an aside, it would be so nice if we could move the whole thing to a worker and be able to send an HTML string, and get back a URL. Workers can already access IndexedDB, so we could read from the filesystem there, too. This would mean having access to DOMParser (even if we can't touch the main DOM from a worker, being able to parse HTML is still incredibly useful for rewriting, diff'ing, etc).

Finally, we do dynamic substitutions of relative paths for generated Blob URLs at runtime by hijacking XMLHttpRequest and using our postMessage link from the iframe to the editor in order to return response data for a given filename.

And it all works! Sure, there's lots of things we won't ever be able to cope with, from synchronous XHR to various types of DOM manipulation by scripts that reference URLs as strings. But for the general case, it works remarkably well. Try downloading and dragging a zipped web site template from http://html5up.net/ into the editor. Bramble doesn't claim to be able to replace a full, local development environment for every use case; however, it makes it unnecessary in most common cases. It's amazing what the modern web can do via storage, file, drag-and-drop, parser, and worker APIs.

Origin Sandboxing

I talk about Thimble and Bramble as different things, and they are, especially at runtime. Bramble is an embeddable widget with an iframe API, and Thimble hosts it and provides some UI for common operations.

I've put a simple demo of the Bramble API online for people to try (source is here). Bramble uses, but doesn't own, its filesystem; nor does it have any notion of where the files came from or where they are going. It also doesn't have opinions about how the filesystem should be laid out.

This is all done intentionally so that we can isolate the editor and preview from the hosting app, running each on a different domain. We want users to be able to write arbitrary code, execute and store it; but we don't want to mix code for the hosting app and the editor/preview. The hosting app needs to decide on a filesystem layout, get and write the files, and then "boot" Bramble.

I've written previously about how we use MessageChannel to remotely host an IndexedDB backed filesystem in a remote window running on another domain: Thimble owns the filesystem and database and responds to proxied requests to do things via postMessage.

In the case of Thimble, we store data in a Heroku app using postgres on the server. Thimble listens for filesystem events, and then queues and executes file update requests over the network to sync the data upstream. Published projects are written to S3, and we then serve them on a secure domain. Because users can upload files to their filesystem in the editor, it makes it easier to transition to an https:// only web.

When the user starts Thimble, we request a project as a gzipped tarball from the publishing server, then unpack it in a Worker and recreate the filesystem locally. Bramble then "mounts" this local folder and begins working with the local files and folders, with no knowledge of the servers (all data is autosaved, and survives refreshes).

Conclusion

Now that we've got the major pieces in place, I'm interested to see what people will do with both Thimble and Bramble. Because we're in a full browser vs. an "almost-browser" shell, we have access to all the latest toys (for example, WebRTC and the camera). Down the road we could use this for some amazing pair programming setups, so learners and mentors could work with each other directly over the web on the same project.

We can also do interesting things with different storage providers. It would be just as easy to have Bramble talk to Github, Dropbox, or some other cloud storage provider. We intentionally kept Thimble and Bramble separate in order to allow different directions in the future.

Then there's all the possibilities that custom extensions opens up (did I mention that Bramble has dynamic extension loading? because it does!). I'd love to see us use bundles of extensions to enable different sorts of learning activities, student levels, and instructional modes. I'm also really excited to see what kind of new curriculum people will build using all of this.

In the meantime, please try things out, file bugs, chat with us on irc #thimble on moznet and have fun making something cool with just your browser. Even better, teach someone how to do it.

Let me close by giving a big shout out to the amazing students (current and former) who hacked on this with me. You should hire them: Gideon Thomas, Kieran Sedgwick, Kenny Nguyen, Jordan Theriault, Andrew Benner, Klever Loza Vega, Ali Al Dallal, Yoav Gurevich, as well as the following top notch Mozilla folks, who have been amazing to us: Hannah Kane, Luke Pacholski, Pomax, Cassie McDaniel, Ashley Williams, Jon Buckley, and others.

This Week In RustThis Week in Rust 94

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us an email! Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

From the Blogosphere

New Releases & Project Updates

  • Rust web framework comparison. A comparison of some web frameworks written in Rust.
  • eco. A tool for reasoning about breaking changes in Rust ecosystems.
  • mioco. Scalable, asynchronous IO coroutine-based handling for Rust (aka MIO COroutines).
  • cargo-vendor. Cargo subcommand to vendor crates.io dependencies.
  • regex-dfa. Proof of concept for fast regexes in Rust (a regex -> DFA compiler).
  • rust-gc. Simple tracing (mark and sweep) garbage collector for Rust.
  • torch. Generate CPU FlameGraphs based on DWARF Debug Info.
  • libwebpki. Web PKI Certificate Validation in Rust.
  • sokoban-rs. An implementation of Sokoban in Rust.
  • urdict. Command line client for Urban Dictionary.
  • power-assert-rs. Power Assert in Rust. Provides better assertion message.
  • rust-passert. Pretty/Power assertion macros for Rust.
  • colerr. Wrap a given process and colorize its standard error output.
  • minesweeper-rs. Simple minesweeper in Rust.

What's cooking on nightly?

104 pull requests were merged in the last week.

New Contributors

  • Adam Crume
  • Artem Shitov
  • Cesar Eduardo Barros
  • Daan Rijks
  • Jake Shadle
  • Matěj Grabovský
  • Michael Choate
  • Nikolay Kondratyev
  • Overmind JIANG
  • Tim JIANG

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now. This week's FCPs are:

New RFCs

Upcoming Events

If you are running a Rust event please add it to the calendar to get it mentioned here. Email Erick Tryzelaar or Brian Anderson for access.

fn work(on: RustProject) -> Money

No jobs listed for this week. Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

"And God said, Noah you must transport these animals across a large body of water... but they are not Send. And Noah replied, I shall build a great Arc!"durka42 on #rust

Thanks to tomprogrammer for the tip. Submit your quotes for next week!

Wladimir PalantWhy you probably want to disable jQuery.parseHTML even though you don't call it

TL;DR: jQuery.parseHTML is a security hazard and will be called implicitly in a number of obvious and not so obvious situations.

Why should you care?

Hey, jQuery is great! It’s so great that Stack Overflow users will recommend it no matter what your question is. And now they have two problems. Just kidding, they will have the incredible power of jQuery:

$("#list").append('<li title="' + item.info + '">' + item.name + '</li>');

The above is locating a list in the document, creating a new list item with dynamic content and adding it to the list — all that in a single line that will still stay below the 80 columns limit. And we didn’t even lose readability in the process.

Life is great until some fool comes along and mumbles “security” (yeah, that’s me). Can you tell whether the code above is safe to be used in a web application? Right, it depends on the context. Passing HTML code to jQuery.append will use the infamous innerHTML property implicitly. If you aren’t careful with the HTML code you are passing there, this line might easily turn into a Cross-Site Scripting (XSS) vulnerability.

Does item.name or item.info contain data from untrusted sources? Answering that question might be complicated. You need to trace the data back to its source, decide who should be trusted (admin user? localizer?) and make sure you didn’t forget any code paths. And even if you do all that, some other developer (or maybe even yourself a few months from now) might come along and add another code path where item.name is no longer trusted. Do you want to bet on this person realizing that they are making an entirely different piece of code insecure?

It’s generally better to give jQuery structured data and avoid taking any chances. The secure equivalent of the code above would be:

$("#list").append($("<li>", {title: item.info}).text(item.name));

Not quite as elegant any more but now jQuery will take care of producing a correct HTML structure and you don’t need to worry about that.

Wait, there is more!

There is one remarkable thing about jQuery APIs: each function can take all kinds of parameters. For example, the .append() function we used above can take a DOM element, a CSS selector, HTML code or a function returning any of the above. This keeps function names short, and you only need to remember one function name instead of four.

The side effect is however: even if you are not giving jQuery any HTML code, you still have to keep in mind that the function could accept HTML code. Consider the following code for example:

$(tagname + " > .temporary").remove();

This will look for elements of class temporary within a given tag and remove them, right? Except that the content of tagname better be trusted here. What will happen if an attacker manages to set the value of tagname to "<img src='dummy' onerror='alert(/xss/)'>"? You probably guessed it, the “selector” will be interpreted as HTML code and will execute arbitrary JavaScript code.

There are more than a dozen jQuery functions that will happily accept both selectors and HTML code. Starting with jQuery 1.9.0, security issues here got somewhat less likely: the string has to start with < in order to be interpreted as HTML code. Older versions will accept anything as HTML code as long as it doesn’t contain #; the versions before jQuery 1.6.1 didn’t even have that restriction.

To sum up: you’d better use jQuery 1.9.0 or above, otherwise your dynamically generated selector might easily end up being interpreted as an HTML string. And even with recent jQuery versions you should be careful with dynamic selectors: the first part of the selector should always be a static string to avoid security issues.

Defusing jQuery

With almost all of the core jQuery functionality potentially problematic, evaluating security of jQuery-based code is tricky. Ideally, one would simply disable unsafe functionality so that parsing HTML code by accident would no longer be possible. Unfortunately, there doesn’t seem to be a supported way yet. The approach I describe here seems to work in the current jQuery versions (jQuery 1.11.3 and jQuery 2.1.4) but might not prevent all potential issues in older or future jQuery releases. Use at your own risk! Oh, and feel free to nag jQuery developers into providing supported functionality for this.

There is a comment in the source code indicating that the jQuery.parseHTML function being missing is an expected situation. However, removing this function doesn’t resolve all the issues, and it disables safe functionality as well. Removing jQuery.buildFragment, on the other hand, doesn’t seem to have any downsides:

delete jQuery.buildFragment;

// Safe element creation still works
$('<img>', {src: "dummy"});

// Explicitly assigning or loading HTML code for an element works
$(document.body).html('<img src="dummy">');
$(document.body).load(url);

// These will throw an exception however
$('<img src="dummy">');
$(document.body).append('<img src="dummy">');
$.parseHTML('<img src="dummy">');

Of course, you have to adjust all your code first before you disable this part of the jQuery functionality. And even then you might have jQuery plugins that will stop working with this change. There are some code paths in the jQuery UI library for example that rely on parsing non-trivial HTML code. So this approach might not work for you.

But how do I create larger DOM structures?

The example creating a single list item is nice of course but what if you have to create some complicated structure? Doing this via dozens of nested function calls is impractical and will result in unreadable code.

One approach would be placing this structure in your HTML document, albeit hidden. Then you would need to merely clone it and fill in the data:

<style type="text/css">
  #entryTemplate { display: none; }
</style>

<div id="entryTemplate">
  <div class="title"></div>
  <div class="description"></div>
</div>

<script>
  var template = $("#entryTemplate");
  var entry = template.clone().removeAttr("id");
  entry.find(".title").text(item.title);
  entry.find(".description").text(item.description);
  $(document.body).append(entry);
</script>

Other templating approaches for JavaScript exist as well of course. It doesn’t matter which one you use as long as you don’t generate HTML code on the fly.

Christian HeilmannQuickie: Fading in a newly created element using CSS

Update: I got an email from James at VIDesignz who found another solution to this problem using the :empty selector. I added it at the end of the article.

As part of our JSFoo workshop today I was asked to look into an issue a team had: you cannot apply a CSS transition to a newly created element when you change the CSS settings in JavaScript. As I was dealing with professionals, they created a simple JSFiddle to show the problem:

As you can see, just changing the property (in this case the opacity) is not enough to trigger a transition. There are a few solutions to this shown in the Fiddle, too, like forcing a reflow, which of course could be a terrible idea.

I played with this and found the solution to be to not change the properties in JavaScript (which is kind of dirty anyways) but leave it all to CSS instead. The biggest part of the solution is not to use a transition but an animation instead and trigger it by applying a class to the newly created element right after adding it to the DOM:

Update: As mentioned by Kyle Simpson on Twitter, there is a way to do the same with a transition, but you need to wrap the creation and applying the class into requestAnimationFrame calls which in turn means some polyfilling:

Update2 You can also use the :empty selector in CSS to achieve the same when you add the new element as a child:

Aaron KlotzOn WebExtensions

There has been enough that has been said over the past week about WebExtensions that I wasn’t sure if I wanted to write this post. As usual, I can’t seem to help myself. Note the usual disclaimer that this is my personal opinion. Further note that I have no involvement with WebExtensions at this time, so I write this from the point of view of an observer.

API? What API?

I shall begin with the proposition that the legacy, non-jetpack environment for addons is not an API. As ridiculous as some readers might consider this to be, please humour me for a moment.

Let us go back to the acronym, “API.” Application Programming Interface. While the usage of the term “API” seems to have expanded over the years to encompass just about any type of interface whatsoever, I’d like to explore the first letter of that acronym: Application.

An Application Programming Interface is a specific type of interface that is exposed for the purposes of building applications. It typically provides a formal abstraction layer that isolates applications from the implementation details behind the lower tier(s) in the software stack. In the case of web browsers, I suggest that there are two distinct types of applications: web content, and extensions.

There is obviously a very well defined API for web content. On the other hand, I would argue that Gecko’s legacy addon environment is not an API at all! From the point of view of an extension, there is no abstraction, limited formality, and not necessarily an intention to be used by applications.

An extension is imported into Firefox with full privileges and can access whatever it wants. Does it have access to interfaces? Yes, but are those interfaces intended for applications? Some are, but many are not. The environment that Gecko currently provides for legacy addons is analogous to an operating system running every single application in kernel mode. Is that powerful? Absolutely! Is that the best thing to do for maintainability and robustness? Absolutely not!

Somewhere a line needs to be drawn to demarcate this abstraction layer and improve Gecko developers’ ability to make improvements under the hood. Last week’s announcement was an invitation to addon developers to help shape that future. Please participate and please do so constructively!

WebExtensions are not Chrome Extensions

When I first heard rumors about WebExtensions in Whistler, my source made it very clear to me that the WebExtensions initiative is not about making Chrome extensions run in Firefox. In fact, I am quite disappointed with some of the press coverage that seems to completely miss this point.

Yes, WebExtensions will be implementing some APIs to be source compatible with Chrome. That makes it easier to port a Chrome extension, but porting will still be necessary. I like the Venn Diagram concept that the WebExtensions FAQ uses: Some Chrome APIs will not be available in WebExtensions. On the other hand, WebExtensions will be providing APIs above and beyond the Chrome API set that will maintain Firefox’s legacy of extensibility.

Please try not to think of this project as Mozilla taking functionality away. In general I think it is safe to think of this as an opportunity to move that same functionality to a mechanism that is more formal and abstract.

François MarierLetting someone ssh into your laptop using Pagekite

In order to investigate a bug I was running into, I recently had to give my colleague ssh access to my laptop behind a firewall. The easiest way I found to do this was to create an account for him on my laptop and setup a pagekite frontend on my Linode server and a pagekite backend on my laptop.

Frontend setup

Setting up my Linode server in order to make the ssh service accessible and proxy the traffic to my laptop was fairly straightforward.

First, I had to install the pagekite package (already in Debian and Ubuntu) and open up a port on my firewall by adding the following to both /etc/network/iptables.up.rules and /etc/network/ip6tables.up.rules:

-A INPUT -p tcp --dport 10022 -j ACCEPT

Then I created a new CNAME for my server in DNS:

pagekite.fmarier.org.   3600    IN  CNAME   fmarier.org.

With that in place, I started the pagekite frontend using this command:

pagekite --clean --isfrontend --rawports=virtual --ports=10022 --domain=raw:pagekite.fmarier.org:Password1

Backend setup

After installing the pagekite and openssh-server packages on my laptop and creating a new user account:

adduser roc

I used this command to connect my laptop to the pagekite frontend:

pagekite --clean --frontend=pagekite.fmarier.org:10022 --service_on=raw/22:pagekite.fmarier.org:localhost:22:Password1

Client setup

Finally, my colleague needed to add the following entry to ~/.ssh/config:

Host pagekite.fmarier.org
  CheckHostIP no
  ProxyCommand /bin/nc -X connect -x %h:10022 %h %p

and install the netcat-openbsd package since other versions of netcat don't work.

On Fedora, we used netcat-openbsd-1.89 successfully, but this newer package may also work.

He was then able to ssh into my laptop via ssh roc@pagekite.fmarier.org.

Making settings permanent

I was quite happy setting things up temporarily on the command-line, but it's also possible to persist these settings and to make both the pagekite frontend and backend start up automatically at boot. See the documentation for how to do this on Debian and Fedora.

Cameron Kaiser38.2.1 is available

TenFourFox 38.2.1 is available (release notes, hashes, downloads). Due to the fact this is a chemspill and we're already delayed, it will become live by this evening. Other than the Mozilla fixes, issue 306 is also repaired. Further work on 38's MP3 support is being deferred until the replacement hard disk arrives (should be early next week).

Don't forget to test 38.2.1 with incremental GC disabled. See the previous post. Enjoy our new sexy wiki, too. Sexy. Yes.

Emma IrwinParticipation Leadership Framework 0.1

In the last heartbeat, as part of our Q3 goals for leadership development, I interviewed a diverse set of people across Mozilla, asking what they think the skills, knowledge and attitudes of effective Participation Leadership at Mozilla are. Two things really stood out during this process. The first was how many people (staff, contributors and alumni) are truly, truly dedicated to the success of each other and Mozilla’s mission, which was really inspiring and helped inform the quality of this Framework. The second was how many opportunities and resources for leadership development already exist (or are being created); bundled together with more specifically targeted curriculum and focused outcomes, they will provide powerful learning-by-participating experiences.

This Heartbeat iterated on themes that emerged during those interviews. I thank those who provided feedback on Discourse and GitHub, all of which brought us to this first 0.1 version.


Foundations of Participation Leadership are the core skills, knowledge and attitudes that contribute to success on both personal goals and goals for Participation at Mozilla.

Building Blocks of Participation Leadership are units of learning that together provide a whole vision for leadership, but individually build the skills, attitudes and knowledge that inform specific learning outcomes as needed.

Examples of skills, knowledge and attitudes for each:

Personal Leadership

  • Accountability
  • Decision Making
  • Introspective, Authentic Leadership
  • “My Leadership Identity at Mozilla”

Essential Mozilla

  • Mozilla’s Timeline & History
  • Advocacy
  • Mozilla’s Mission
  • “Why Mozilla, Why Now, Why Me?”

Building for Action and Impact

  • Community Building
  • Agile for Participation Projects
  • Designing with Participation Hooks & Triggers
  • Building Bridges to Mozilla

Empowering Teams and People

  • Uncovering Unconscious Bias
  • Mentoring & Finding Mentorship
  • Teach & Facilitate Like Mozilla
  • Distributed Leadership

Working Open

  • Open Practices
  • Writing in the Open
  • Sharing & Licensing
  • Activism in the Open

Developing Specialization

  • Creating Automated Tests for Firefox OS
  • Inviting Skilled Participation to Fennec
  • Web Literacy Leadership

We would love your comments, suggestions and ideas on where we are so far. In the next heartbeat we’ll begin building and running workshops with these as a guide, and further iterate towards 1.0.

 


Image Credit: Lead Type by jm3

Stuart ColvillePython: testing beyond exceptions

Recently, while working on some code, I was writing some basic tests that checked for an exception being raised.

The tests looked like this:

def test_user_identification_is_valid_option(self):
    with self.assertRaises(ValueError):
        example_seller(products=[{
            'id': 'hai',
            'description': 'a description',
            'recurrence': None,
            'user_identification': True,
        }])


def test_user_identification_is_required(self):
    with self.assertRaises(ValueError):
        example_seller(products=[{
            'id': 'hai',
            'description': 'a description',
            'recurrence': None,
        }])

I realised that it was really easy to be tricked by other parts of the code that might raise the same exception. For example, in this case the required code is raising ValueError separately.

In JavaScript, using assertions in chai, assert.throws takes a function that should throw an exception; the second arg is the exception instance you're expecting, and the third arg can be a regex to match the string passed to the exception.

I wanted to do the same thing in Python.

assertRaisesRegexp

In Python (2.7+) assertRaisesRegexp does exactly what we want so we can fix our tests to be more explicit like so:

def test_user_identification_is_valid_option(self):
    with self.assertRaisesRegexp(ValueError, 'user_identification must'):
        example_seller(products=[{
            'id': 'hai',
            'description': 'a description',
            'recurrence': None,
            'user_identification': True,
        }])


def test_user_identification_is_required(self):
    with self.assertRaisesRegexp(ValueError, 'Missing user_ident'):
        example_seller(products=[{
            'id': 'hai',
            'description': 'a description',
            'recurrence': None,
        }])

The second argument can either be a regular expression object or a string containing a regular expression suitable for use by re.search().

With this in place we can be sure our tests are asserting on the actual part of the code that raises the exception not just asserting that any old ValueError is good enough.

Mozilla Addons BlogAMO T-shirt Update

Just want to give a quick update on the snazzy t-shirts designed by Erick León Bolinaga. They were finally done printing this week, and are headed to a fulfillment center for shipping. We expect them to begin shipping by the end of next week.

Thanks for your patience!

Dan MinorMozReview Montréal Work Week


Under the watchful gaze (and gentle cooing) of the pigeons, the MozReview developers gathered in Montréal for a work week. My main goal for the week was to make substantial progress towards Autoland to Inbound, my primary project for this quarter, maybe even a deployment of an initial iteration to the development server.

While we didn’t get quite that far, we did make a lot of progress on a lot of fronts, including finally getting Bugzilla API key support deployed. This is the first work week I’ve done where we just stayed and worked together in an AirBNB apartment rather than get hotel rooms and make use of shared space in a Mozilla office. I really enjoyed this, it was a nice casual work environment and we got a lot of focused work done.

Some things I worked on this week, in varying degrees of completion:

  • Bug 1198086 This adds an endpoint for making Autoland to “non-Try” tree requests which will allow us to build the UI for Autoland to Inbound. A while back I fixed Bug 1183295 which added support for non-Try destinations in the Autoland service itself. This means, outside of bug fixes, the backend support for Autoland to Inbound is implemented and we can focus on the UI.

  • Bug 1196263 is my other main project this quarter. We want to add a library which enables people to write their own static analysis bots that run against MozReview. This is based on work that GPS did in the winter to create a static analysis bot for Python. We still need to rework some of the messages we’re sending out on Pulse when a review is published, at the moment we’ll end up re-reviewing unchanged commits and spamming lots of comments. This was a problem with the original Python bot and needs to be fixed before bots can be enabled.

  • Bug 1168486 involves creating a “Custom Hosting Service” for review repositories. This will let us maintain metadata about things like whether or not a repo has an associated Try repository so we can disable triggering try runs on reviews where this doesn’t make sense.

  • Bug 1123139 is a small UI fix to remove unnecessary information from the Description field. We’ve decided to reserve the Description field for displaying the Mercurial commit message which will hopefully encourage people to write more descriptive messages for their changes. This will also move the “pull down these commits” hint to the Information section on the right of the page. Like most small UI fixes, this consumed an embarrassing amount of time. I’ve come to realize that no matter how many bad UIs I leave under my pillow at night, the UI fairy will not come and fix them, so I’ll just have to get better at this sort of thing.

Patrick ClokeBoston Python: Twisted async networking framework

Yesterday, Stephen DiCato and I gave a talk for Boston Python titled: Twisted async networking framework. It was an introductory to intermediate-level talk about using the Twisted networking framework, based on our experiences at Percipient Networks.

The talk, available on our GitHub (PDF), covered a few basic topics:

  1. What is asynchronous programming?
  2. What is Twisted?
  3. When/why to use Twisted?
  4. What is the event loop (reactor)?
  5. What are Deferreds and how do you use them?
  6. What are protocols (and related objects) and how do you use them?

Additionally there was a ‘bonus’ section: Using Twisted to build systems & services.

We used an example of a very simple chat server (NetCatChat: where the official client is netcat) to demonstrate these principles. All of our (working!) demo code is included in the repository.
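
For a flavour of what the demo looked like, here is a minimal sketch of a netcat-friendly chat server in Twisted. This is my own condensed version for illustration, not the actual demo code from the repository:

from twisted.internet import reactor
from twisted.internet.protocol import Factory
from twisted.protocols.basic import LineReceiver

class ChatProtocol(LineReceiver):
    # netcat sends bare newlines rather than the default \r\n delimiter
    delimiter = b'\n'

    def connectionMade(self):
        self.factory.clients.append(self)

    def connectionLost(self, reason):
        self.factory.clients.remove(self)

    def lineReceived(self, line):
        # Relay each received line to every other connected client.
        for client in self.factory.clients:
            if client is not self:
                client.sendLine(line)

class ChatFactory(Factory):
    protocol = ChatProtocol

    def __init__(self):
        self.clients = []

reactor.listenTCP(8123, ChatFactory())
reactor.run()

Clients connect with something like nc localhost 8123, and every line they type is relayed to the other participants.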

There was a great turnout (almost 100 people showed up) and I greatly enjoyed the experience. Thanks to everyone who came, the sponsors for the night, Boston Python for setting this up, and Stephen for co-presenting! Please let us know if you have any questions or comments.

Rubén MartínBringing better support to regional communities

During this third quarter, one of the main goals for the Participation team at Mozilla is to better support Reps and Regional communities.

We want to focus our efforts this quarter on 10 countries so that we can be more efficient with the resources we have and be able to:

  • Tailor country profiles and a community health dashboard.
  • Develop a mid-term plan with at least three communities.
  • Systematize a coaching framework with volunteers.

As part of the Reps/Regional group I’m currently involved in these efforts, focusing on three European countries: Germany, France and the UK.

Over the past and coming weeks I’ll be meeting volunteers from these communities to learn more about them and to figure out where to find the information needed to develop the country profiles and the community dashboard, an important initiative for getting a clear overview of the status of our communities.

I’m also meeting with the awesome German community to work together on a plan to align and improve the community over the next 6 months.

On top of all of this, we are starting a set of 1:1 meetings with key volunteers inside these communities to provide coaching and support in a more personal way, understanding everyone’s views and learning the best ways to develop people’s skills and motivation.

Finally, I’m working to improve the Reps/Regional team’s accountability and workflow productivity, exploring better ways to manage our work as a team and working with the Reps Council to put together a Reps program profile doc so we can better understand the current status and what should be changed or improved.

You can learn more about the Participation team’s Q3 goals and key results, as well as individual team members’ goals, in this public document, and follow our daily work on our GitHub page.

Emily DunhamApache Licenses

Apache Licenses

At the bottom of the Apache 2.0 License file, there’s an appendix:

APPENDIX: How to apply the Apache License to your work.

...

Copyright [yyyy] [name of copyright owner]

...

Does that look like an invitation to fill in the blanks to you? It sure does to me, and has for others in the Rust community as well.

Today I was doing some licensing housekeeping and made the same embarrassing mistake.

This is a PSA: before filling in those inviting blanks in an Apache license text, double-check whether they are part of the appendix.

Christian HeilmannES6 for now: Template strings

ES6 is the future of JavaScript and it is already here. It is a finished specification, and it brings a lot of features a language requires to stay competitive with the needs of the web of now. Not everything in ES6 is for you, and in this little series of posts I will show features that are very handy and already usable.

If you look at JavaScript code I’ve written you will find that I always use single quotes to define strings instead of double quotes. JavaScript is OK with either; the following two examples do exactly the same thing:

var animal = "cow";
 
var animal = 'cow';

The first reason I prefer single quotes is that they make it easier to assemble HTML strings with properly quoted attributes:

// with single quotes, there's no need to 
// escape the quotes around the class value
var but = '<button class="big">Save</button>';
 
// this is a syntax error:
var but = "<button class="big">Save</button>";
 
// this works:
var but = "<button class=\"big\">Save</button>";

The only time you need to escape now is when you use a single quote in your HTML, which should be a very rare occasion. The only thing I can think of is inline JavaScript or CSS, which means you are very likely doing something shady or desperate to your markup. Even in your text, you are probably better off not using a straight single quote but the typographically more pleasing ’.

Aside: Of course, HTML is forgiving enough to omit the quotes or to use single quotes around an attribute, but I prefer to create readable markup for humans rather than relying on the forgiveness of a parser. We made the HTML5 parser forgiving because people wrote terrible markup in the past, not as an excuse to keep doing so.

I’ve suffered enough in the DHTML days (using document.write to create a document inside a frameset in a new popup window, and other abominations) to not want to use the escape character ever again. At times we needed triple escapes, and that was even before we had colour coding in our editors. It was a mess.

Expression substitution in strings?

Another reason why I prefer single quotes is that I wrote a lot of PHP in my time for very large web sites where performance mattered a lot. In PHP, there is a difference between single and double quotes. Single-quoted strings don’t have any substitution in them; double-quoted ones do. Back in the days of PHP 3 and 4 that meant using single quotes was much faster, as the parser didn’t have to go through the string to substitute values. Here is an example of what that means:

<?php
  $animal = 'cow';
  $sound = 'moo';
 
  echo 'The animal is $animal and its sound is $sound';
  // => The animal is $animal and its sound is $sound
 
  echo "The animal is $animal and its sound is $sound";
  // => The animal is cow and its sound is moo
?>

JavaScript didn’t have this substitution, which is why we had to concatenate strings to achieve the same result. This is pretty unwieldy, as you need to jump in and out of quotes all the time.

var animal = 'cow';
var sound = 'moo';
 
alert('The animal is ' + animal + ' and its sound is ' +
 sound);
// => "The animal is cow and its sound is moo"

Multi line mess

This gets really messy with longer and more complex strings, and especially when we assemble a lot of HTML. And most likely you will sooner or later end up with your linting tool complaining about trailing whitespace after a + at the end of a line. This stems from the fact that JavaScript has no multi-line strings:

 
// this doesn't work
var list = '<ul>
  <li>Buy Milk</li>
  <li>Be kind to Pandas</li>
  <li>Forget about Dre</li>
</ul>';
 
// This does, but urgh… 
var list = '<ul>\
  <li>Buy Milk</li>\
  <li>Be kind to Pandas</li>\
  <li>Forget about Dre</li>\
</ul>';
 
// This is the most common way, and urgh, too…
var list = '<ul>' +
'  <li>Buy Milk</li>' +
'  <li>Be kind to Pandas</li>' +
'  <li>Forget about Dre</li>' +
'</ul>';

Client side templating solutions

In order to work around the mess that is string handling and concatenation in JavaScript, we did what we always do: we write a library. There are many HTML templating libraries, with Mustache.js probably having been the seminal one. All of these follow their own, non-standardised syntax and work in that frame of mind. It’s a bit like saying that you write your content in markdown and then realising that there are many different ideas of what “markdown” means.

Enter template strings

With the advent of ES6 and its standardisation we can now rejoice, as JavaScript has a new kid on the block when it comes to handling strings: Template Strings. Support for template strings in current browsers is encouraging: Chrome 44+, Firefox 38+, Microsoft Edge and WebKit are all on board. Safari, sadly enough, is not, but it’ll get there.

The genius of template strings is that they use a new string delimiter, one that isn’t in use in either HTML or normal text: the backtick (`).

Using this one we now have string expression substitution in JavaScript:

var animal = 'cow';
var sound = 'moo';
 
alert(`The animal is ${animal} and its sound is ${sound}`);
// => "The animal is cow and its sound is moo"

The ${} construct can take any JavaScript expression that returns a value; you can, for example, do calculations or access properties of an object:

var out = `ten times two totally is ${ 10 * 2 }`;
// => "ten times two totally is 20"
 
var animal = {
  name: 'cow',
  ilk: 'bovine',
  front: 'moo',
  back: 'milk',
}
alert(`
  The ${animal.name} is of the 
  ${animal.ilk} ilk, 
  one end is for the ${animal.front}, 
  the other for the ${animal.back}
`);
// => 
/*
  The cow is of the 
  bovine ilk, 
  one end is for the moo, 
  the other for the milk
*/

That last example also shows you that multi-line strings are not an issue at all any longer.
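
For example, the shopping-list markup from earlier needs no escaped line endings or concatenation any more (my own reworking of the snippet above):

var list = `<ul>
  <li>Buy Milk</li>
  <li>Be kind to Pandas</li>
  <li>Forget about Dre</li>
</ul>`;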

Tagged templates

Another thing you can do with template strings is prepend them with a tag, which is the name of a function that is called and gets the string as a parameter. For example, you could encode the resulting string for URLs without having to resort to the horridly named encodeURIComponent all the time.

function urlify (str) {
  return encodeURIComponent(str);
}
 
urlify `http://beedogs.com`;
// => "http%3A%2F%2Fbeedogs.com"
urlify `woah$£$%£^$"`;
// => "woah%24%C2%A3%24%25%C2%A3%5E%24%22"
 
// nesting also works:
 
var str = `foo ${urlify `&&`} bar`;
// => "foo %26%26 bar"

This works, but relies on implicit array-to-string coercion. What the function receives is not a single string: the first parameter is an array of the literal string parts, and the substituted values follow as further parameters. Used the way I show here, the array gets converted to a string for convenience, but the correct way is to access the parts directly.

Retrieving strings and values from a template string

Inside the tag function you can not only get the full string but also its parts.

function tag (strings, values) {
  console.log(strings);
  console.log(values);
  console.log(strings[1]);
}
 
tag `you ${3+4} it`;
/* =>
 
Array [ "you ", " it" ]
7
it
 
*/
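
With access to the parts like that, the earlier urlify example can be rewritten without relying on the coercion. This is my own sketch, not from the original article; note the rest parameter, which collects all the substituted values rather than just the first one:

function urlify (strings, ...values) {
  // interleave the literal parts with the URL-encoded values
  return strings.reduce(function (result, str, i) {
    var encoded = i < values.length ? encodeURIComponent(values[i]) : '';
    return result + str + encoded;
  }, '');
}

var user = 'jo&jane';
urlify `https://example.com/?q=${user}`;
// => "https://example.com/?q=jo%26jane"

(example.com and the query are, of course, made up for the example.)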

There is also an array of the raw strings provided to you, which means that you get all the characters in the string, including control characters. Say, for example, you add a linebreak with \n: you will get the double whitespace in the cooked string, but the literal \n characters in the raw strings:

function tag (strings, values) {
  console.log(strings);
  console.log(values);
  console.log(strings[1]);
  console.log(strings.raw[1]);
}
 
tag `you ${3+4} \nit`;
/* =>
 
Array [ "you ", "  it" ]
7
 
it
 \nit
*/

Conclusion

Template strings are one of those nifty little wins in ES6 that can be used right now. If you have to support older browsers you can of course transpile your ES6 to ES5; you can also do a feature test for template string support using a library like featuretests.io or with the following code:

var templatestrings = false;
try {
  new Function( "`${2+2}`" );
  templatestrings = true;
} catch (err) {
  templatestrings = false;
} 
 
if (templatestrings) {
	// …
}

Cameron Kaiser38.2.1 delayed due to hardware failure

TenFourFox 38.2.1 was supposed to be released to you today, but the hard disk used for compiling it blew up sometime yesterday and I've been recovering data from the drive and the last backup instead. The G5 version was built before the disk died, and does check out, but the other three builds haven't been done yet. Let this be a reminder that DiskWarrior can fix a lot of things but not hardware failure (and people complaining of random faults in TenFourFox, please check your hardware first -- the symptom here was random freezes because the electronics kept dropping the drive off the SATA bus unexpectedly), so Data Rescue is busy getting the recoverable pieces off it and the rest I can restore from the file server. Both tools belong in your Power Mac bug-out bag and both still support PowerPC, so please support those vendors who still support us. The machine should be repaired enough to resume builds late tonight, but I don't have an estimated time of release (hopefully no later than Sunday). The release includes two Mozilla fixes and will also include a tweak for TenFourFox issue 306.

In the meantime, a fair bit of the wiki has been updated and rewritten for Github. I am also exploring an idea from bug 1160228 by disabling incremental garbage collection entirely. This was a bad idea on 31 where incremental GC was better than nothing, but now that we have generational garbage collection and the nursery is regularly swept, the residual tenured heap seems small enough to make periodic full GCs more efficient. On a tier 1 platform the overhead of lots of incremental cycles may well be below the noise floor, but on the pathological profile in the bug even a relatively modern system had a noticeable difference disabling incremental GC. On this G5 occasionally I get a pause in the browser for 20 seconds or so, but that happens very infrequently, and otherwise now that the browser doesn't have to schedule partial passes it seems much sprightlier and stays so longer. The iBook G4 saw an even greater benefit. Please note that this has not been tested well with multiple compartments or windows, so your mileage may vary, but with that said please see what you think: in about:config set javascript.options.mem.gc_incremental to false and restart the browser to flush everything out. If people generally find this superior, it may become the default in 38.3.

Gervase MarkhamTop 50 DOS Problems Solved: Sorting Directory Listings

Q: Could you tell me if it’s possible to make the DIR command list files in alphabetical order?

A: Earlier versions of DOS didn’t allow this but there’s a way round it. MS-DOS 5 gives you an /ON switch to use with DIR, for instance:

DIR *.TXT /ON /P

would list all the files with names ending in .TXT, pause the listing every screenful (/P) and sort the names into alphabetical order (/ON).

Users of earlier DOS versions can shove the output from DIR through a utility program that sorts the listing before printing it on the screen. That utility is SORT.EXE, supplied with DOS. … [So:]

DIR | SORT

diverts the output from DIR into SORT, which sorts the directory listing and sends it to the screen. Put this in a batch file called SDIR.BAT and you will have a sorted directory command called SDIR.
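
A minimal SDIR.BAT along those lines might look like this (my own sketch, not from the book):

@ECHO OFF
REM Sorted directory listing; %1 is an optional filespec such as *.TXT
DIR %1 | SORT

Running SDIR *.TXT then gives a sorted listing of the matching files.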

I guess earlier versions of DIR followed the Unix philosophy of “do one thing”…

Daniel StenbergContent over HTTP/2

Roughly a week ago, on August 19, cdn77.com announced that they are the “first CDN to publicly offer HTTP/2 support for all customers, without ‘beta’ limitations”. They followed up just hours later with a demo site showing off how HTTP/2 might perform side by side with a HTTP/1.1 example. And yes, the big competitor CDNs are not yet offering HTTP/2 support it seems.

Their demo site initially got criticized for not being realistic and for showing HTTP/2 in a much better light than a real-life scenario would be likely to, and it was subsequently updated fairly quickly. It is useful to compare it with the similarly designed, previously existing demo sites hosted by Akamai and the Go project.

cdn77’s offering is built on nginx’s alpha patch for HTTP/2 that was announced just two weeks ago. I believe nginx’s full release is still planned to happen by the end of this year.

I’ve talked with cdn77’s Jakub Straka and their lead developer Honza about their HTTP/2 efforts, and since I suspect there are a few others in my audience who are similarly curious, I’m offering this interview-style posting here, intertwined with my own comments and thoughts. It is not just a big ad for this company, but since they’re early players in this field I figure their views and comments are worth reading!

I’ve been in touch with more than one person who has expressed surprise and awe over the fact that they’re using this early patch for nginx in production. So I had to ask them about that. Below, their comments are all prefixed with CDN77 and shown in italics.

nginx

CDN77: “Yes, we are running the alpha patch, which is basically a slightly modified SPDY. In the past we have been in touch with the Nginx team and exchanged tips and ideas, the same way we plan to work on the alpha patch in the future.

We’re actually pretty careful when deploying new and potentially unstable packages into production. We have separate servers for http2 and we are monitoring them 24/7 for any exceptions. We also have dedicated developers who debug any issues we are facing with these machines. We would never jeopardize the stability of our current network.

I’m not an expert on either server-side HTTP/2 or nginx in particular, but I think I read somewhere that the nginx HTTP/2 patch removes the SPDY support in favor of the new protocol.

CDN77: “You are right. HTTP/2 patch rewrites SPDY into the HTTP/2, so the SPDY is no longer supported after applying the patch. Since we have HTTP/2 running on separate servers, we still have SPDY support on the rest of the network.”

Did the team at cdn77 at all consider using something else than nginx for HTTP/2, like the promising newcomer h2o?

CDN77: “Not at all. Nginx is a clear choice for us. Its architecture and modularity is awesome. It is also very reliable and it has a pretty long history.

On scale

Can you share some of the biggest hurdles you had to overcome to deploy HTTP/2 on this scale with nginx?

CDN77: “Since nobody has tried the patch in such a scale ever before, we had to make sure it will hold even under pressure and needed to create a load heavy testing environment. We used servers from our partner company 10gbps.io and their 10G uplinks to create intensive ghost traffic. Also, it was important to make sure that supporting tools and applications are HTTP/2 ready – not all of them were. We needed to adjust the way we monitor and control servers in few cases.

There are a few bugs in Nginx that appear mainly in association with the longer-lived connections. They cause issues with the application layer and consume more resources. To be able to accommodate HTTP/2 and still keep necessary network redundancies, we needed to upgrade our network significantly.

I read this as an indication that the nginx patch isn’t perfected just yet rather than signifying that http2 is special. Perhaps also that http2 connections might use a larger footprint in nginx than good old http1 connections do.

Jakub mentioned they see average “performance savings” in the order of 20 to 60 percent depending on sites and contents with the switch to h2, but their traffic amounts haven’t been that large yet:

CDN77: “So far, only a fraction of the traffic is running via HTTP/2, but that is understandable since we launched the support few days ago. On the first day, only about 0.45% of the traffic was HTTP/2 and a big part of this was our own demo site. Over the weekend, we saw impressive adoption rate and the total HTTP/2 traffic accounts for more than 0.8% now, all that with the portion of our own traffic in this dropping dramatically. We expect to be pushing around 1.2% – 1.5% of total traffic over HTTP/2 till the end of this week.

Understandably, it is ramping up. Still, Firefox telemetry is showing at least 10% of the total traffic over HTTP/2 already.

Future of HTTPS and HTTP/2?

While I’m talking to a CDN operator, I figured I should poll their view on HTTPS going forward! Will the fact that all browsers only support h2 over HTTPS push more of your customers and your traffic in general over to HTTPS, you think?

CDN77: “This is not easy to predict. There is encryption overhead, but HTTP/2 comes with header compression and it is binary. So at this single point, the advantages and disadvantages zero out. Also, the use of HTTPS is rising rapidly even on the older protocol, so we don’t consider this an issue at all.

In general, from a CDN perspective and as someone who just deployed this on a fairly large scale, what’s your general perception of what http2 means going forward?

CDN77: “We believe that this is a huge step forward in how we distribute content online and as a CDN company, we are especially excited as it concerns the very core of our business. From the new features, we have great expectations about cache invalidation that is being discussed right now.

Thanks to Jakub, Honza and Tomáš of cdn77 for providing answers and info. We live in exciting times.

Air MozillaIntern Presentations

Intern Presentations Bernardo Rittmeyer Jatin Chhikara Steven Englehardt Gabriel Luong Karim Benhmida Eduoard Oger Jonathan Almeida Huon Wilson Mariusz Kierski Koki Yoshida

Air Mozilla"Firefox Helps You Log In" Presented by Bernardo Rittmeyer

"Firefox Helps You Log In" Presented by Bernardo Rittmeyer Firefox Helps You Log In: Seamless password management for your daily browsing.

Mozilla WebDev CommunityBeer and Tell – August 2015

Once a month, web developers from across the Mozilla Project get together to spend an hour of overtime to honor our white-collar brethren at Amazon. As we watch our productivity fall, we find time to talk about our side projects and drink, an occurrence we like to call “Beer and Tell”.

There’s a wiki page available with a list of the presenters, as well as links to their presentation materials. There’s also a recording available courtesy of Air Mozilla.

openjck: Discord

openjck was up first and shared Discord, a GitHub webhook that scans pull requests for CSS compatibility issues. When it finds an issue, it leaves a comment on the offending line with a short description and which browsers are affected. The check is powered by doiuse, and projects can add a .doiuse file (using browserslist syntax) that specifies which browser versions they want to be tested against. Discord currently checks CSS and Stylus files.
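
For illustration, a .doiuse file is just a list of browserslist queries describing the browsers the CSS must support; a hypothetical one might look like the following (the one-query-per-line layout is my assumption):

last 2 versions
ie >= 9
> 1%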

The Discord team (mrmakeit, groovecoder, davidwalsh, and openjck) is looking for sites to test Discord out. Work on the site is currently suspended (it remains a side project that the team may work on in their free time) so that feedback can be gathered to determine where the site should go next. If you’re interested in trying out Discord, let groovecoder know!

peterbe: Activity and Fanout.io

Next up was peterbe, with an update to Activity. The site now uses Fanout.io and a message queue to improve how activity items are fetched from GitHub and other sources. The site queues up jobs to fetch data from the Github API, and as the jobs complete, they send their results to Fanout. Fanout’s JavaScript library maintains an open WebSocket with their service, and when Fanout receives the data from the completed jobs, it notifies the client of the new data, which gets written to localStorage and updates the React state. This allows Activity to remain structured as an offline-ready application while still receiving seamless updates if the user has an internet connection.


There’s a donation jar near the exit; for just $25 you can pay for an hour of time for an Amazon engineer to spend with their family. Checks may be made payable to No Questions Asked Laundry, LLC.

If you’re interested in attending the next Beer and Tell, sign up for the dev-webdev@lists.mozilla.org mailing list. An email is sent out a week beforehand with connection details. You could even add yourself to the wiki and show off your side-project!

See you next month!

Matt ThompsonImpact


Video recording of the Aug 26 Mozilla Learning community call

For the Mozilla Learning plan right now, we’re focused on impact. What impact will our advocacy and leadership work have in the world over the next three years? How do we state that in a way that’s memorable, manageable, measurable and motivational?

How do other orgs do it? As a way to think big and step back, we asked participants in Tuesday’s community call to give examples of organizations or projects that inspire them right now. Here’s our list.

Who inspires you?

  • Free Code Camp — learn to code by helping non-profit organizations (Amira)
  • 18F — kicking ass when it comes to bringing open source to government (Kaitlin)
  • The Inter-Agency Network for Education in Emergencies — cool network and community of practice for 15,000 people teaching in refugee camps and other emergency settings around the world (Surman)
  • The Engine Room — small and scrappy, but doing amazing work with teaching open tools for social change (Michelle)
  • GDS — because they somehow manage to work like MoFo, even though they are part of Government (Adam)
  • Keyboardio — open source mechanical keyboard with a wonderful backlight, shipped with a screwdriver so that you can tinker around and reprogram.  (Shreyas)

  • Born Accessible — thinking about web content as “born accessible.”(Emma)
  • WikiSpeed — a non-profit that’s building open source, energy-efficient cars in 17 countries,  with no org chart or management structure (@OpenMatt)
  • NESTA — engaged in some interesting thought leadership that relates well to our work (Sam)

  • Ocean Cleanup — addressing “The Great Pacific Garbage Patch” with business / philanthropy / sponsorship / science / data / youth vision all coming together to stem it (Rebecca)
  • Conservation International — I’m digging their current campaign: “Nature doesn’t need people, people need nature” (Paul)
  • Mercy for Animals — they take a big, often controversial topic and make it approachable — and they have a massive, engaged volunteer force (Lindsey)
  • Truth and Reconciliation Commission Canada (Simona)
  • Generation Squeeze — taking on the impossible task of advocating for worklife balance, childcare and affordable housing on a living wage (ErikaD)

  • NYT documentary of bieber + skrillex + diplo –  Love the focus on storytelling and combo of graphics / animation. (Cassie)
  • model view culture — cranky and continuous analytic deconstructions of intersections between technology, inclusion, diversity with anger and no apologies and a paper journal that arrives on a regular basis. (@leahatplay)
  • Colors magazine — open contribution (Jordan)
  • the Unilever rapper campaign — because it was a long-stale pollution problem that was revitalized with creativity (Andrea)

  • Hollaback — uses online tools to work with young people and confront street harassment (Sara)
  • Craigslist — because their success is based on the assumption that most people are good. (David)
  • Dark Mountain — thinking through how WebLit does / does not survive in the anthropocene. (Chad)
  • NPR – They strike a successful balance between mass appeal and education. (Simon)

Takeaways?

The above examples are…

  1. Crisp. Our group was able to communicate the story for each of these projects — in their own words, off the top of their head, in a single sentence. That means the mission is telegraphic, simple and sticky.
  2. Viral. Each of these organizations has succeeded in creating an influential, mini-evangelist to spread their story for them: you!
  3. Edgy.  Many of these examples have a bit of punk rock or social justice grit. They’re not wearing a bow tie.
  4. Diverse. There’s a broad range of stuff here, not just the usual tech / ed tech suspects. This is a party you’d want to be at.
  5. Real. There’s no jargon or planning language in any of the descriptions people provided — the language is authentic and human, because no one’s trying too hard. It’s just natural and unscripted.

Can we get to this same level of natural, edgy crispness for MoFo and our core strategies? Would others put us on a list like this? Food for thought.

Andrew HalberstadtLooking beyond Try Syntax

Today marks the 5 year anniversary of try syntax. For the uninitiated, try syntax is a string that you put into your commit message which a parser then uses to determine the set of builds and tests to run on your try push. A common try syntax might look like this:

try: -b o -p linux -u mochitest -t none
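
Reading that from left to right (my gloss, not part of the original post):

-b o           build types: opt only
-p linux       platform: 32-bit Linux
-u mochitest   unit test suites: the mochitests
-t none        Talos performance suites: none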

Since inception, it has been a core part of the Mozilla development workflow. For many years it has served us well, and even today it serves us passably. But it is almost time for try syntax to don the wooden overcoat, and this post will explain why.

A brief history on try syntax

In the old days, pushing to try involved a web interface called sendchange.cgi. Pushing is probably the wrong word to use, as at no point did the process involve version control. Instead, patches were uploaded to the web service, which in turn invoked a buildbot sendchange with all the required arguments. Like today, the try server was often overloaded, sometimes taking over 4 hours for results to come back. Unlike today, there was no way to pick and choose which builds and tests you wanted; every try push ran the full set.

The obvious solution was to create a mechanism for people to do that. It was while brainstorming this problem that ted, bhearsum and jorendorff came up with the idea of encoding this information in the commit message. Try syntax was first implemented by lsblakk in bug 473184 and landed on August 27th, 2010. It was a simple time; the list of valid builders could fit into a single 30 line config file; Fennec still hadn't picked up full steam; and B2G wasn't even a figment of anyone's wildest imagination.

It's probably not a surprise to anyone that as time went on, things got more complicated. As more build types, platforms and test jobs were added, the try syntax got harder to memorize. To help deal with this, lsblakk created the trychooser syntax builder just a few months later. In 2011, pbiggar created the trychooser mercurial extension (which was later forked and improved by sfink). These tools were (and still are) the canonical way to build a try syntax string. Little has changed since then, with the exception of the mach try command that chmanchester implemented around June 2015.

One step forward, two steps back

Since around 2013, the number of platforms and test configurations has grown at an unprecedented rate. So much so that the various trychooser tools have been perpetually out of date. Any time someone got around to adding a new job to the tools, two other jobs had already sprung up in its place. Another problem caused by this rapid growth was that try syntax became finicky. There were a lot of edge cases, exceptions to the rule and arbitrary aliases. Often jobs would mysteriously not show up when they should, or mysteriously show up when they shouldn't.

Both of those problems were exacerbated by the fact that the actual try parser code has never had a definite owner. Since it was first created, there have never been more than 11 commits in a year. There have been only two commits to date in 2015.

Two key insights

At this point, there are two things that are worth calling out:

  1. Generating try strings from memory is getting harder and harder, and for many cases is nigh impossible. We rely more and more on tools like trychooser.
  2. Try syntax is sort of like an API on top of which these tools are built.

What this means is that the primary generators of try syntax have shifted from humans to tools. A command line encoded in a commit message is convenient if you're a human generating the syntax manually. But as far as tooling goes, try syntax is one god awful API. Not only do the tools need to figure out the magic strings, they need to interact with version control, create an empty commit and push it to a remote repository.

There is also tooling on the other side of the see saw, things that process the try syntax post push. We've already seen buildbot's try parser but taskcluster has a separate try parser as well. This means that your try push has different behaviour, depending on whether the jobs are scheduled in buildbot or taskcluster. There are other one off tools that do some try syntax parsing as well, including but not limited to try tools in mozharness, the try re-trigger bot and the AWSY dashboard. These tools are all forced to share and parse the same try syntax string, so they have to be careful not to step on each other's toes.

The takeaway here is that for tools, a string encoded as a commit message is quite limiting and a lot less convenient than say, calling a function in a library.

Despair not, young Padawan

So far we've seen how try syntax is finicky, how the tools that use it are often outdated and how it fails as an API. But what is the alternative? Fortunately, over the course of 2015 a lot of progress has been made on projects that for the first time, give us a viable alternative to try syntax.

First and foremost is mozci. Mozci, created by armenzg and adusca, is a tool that hooks into the build api (with early support for taskcluster as well). It can do things like schedule builds and tests against any arbitrary pushes, and is being used on the backend for tools like adusca's try-extender, with integration directly into treeherder planned.

Another project that improves the situation is taskcluster itself. With taskcluster, job configuration and scheduling all lives in tree. Thanks to bhearsum's buildbot bridge, we can even use taskcluster to schedule jobs that still live in buildbot. There's an opportunity here to leverage these new tools in conjunction with mozci to gain complete and total control over how jobs are scheduled on try.

Finally I'd like to call out mach try once more. It is more than a thin wrapper around try syntax that handles your push for you. It actually lets you control how the harness gets run within a job. For now this is limited to test paths and tags, but there is a lot of potential to do some cool things here. One of the current limiting factors is the unexpressiveness of the try syntax API. Hopefully this won't be a problem too much longer. Oh yeah, and mach try also works with git.

A glimpse into the crystal ball

So we have several different projects all coming together at once. The hard part is figuring out how they all tie in together. What do we want to tackle first? How might the future look? I want to be clear that none of this is imminent. This is a look into what might be, not what will be.

There are two places we mainly care about scheduling jobs on try.

First imagine you push your change to try. You open up treeherder, except no jobs are scheduled. Instead you see every possible job in a distinct greyed out colour. Scheduling what you want is as simple as clicking the desired job icons. Hold on a sec, you don't have to imagine it. Adusca already has a prototype of what this might look like. Being able to schedule your try jobs this way has a huge benefit: you don't need to mentally correlate job symbols to job names. It's as easy as point and click.

Second, is pushing a predefined set of jobs to try from the command line, similar to how things work now. It's often handy to have the try command for a specific job set in your shell history and it's a pain to open up treeherder for a simple push that you've memorized and run dozens of times. There are a few improvements we can do here:

  • We can move the curses ui feature of the hg trychooser extension into mach try.
  • We can use mozci to automatically keep the known list of jobs up to date. This is useful for things like generating the curses ui on the fly, validation and tab completion.
  • We can use mozci + taskcluster + buildbot bridge to provide a much more expressive API for scheduling jobs. For example, you could easily push a T-style try run.
  • We can expand some of the functionality in mach try for controlling how the harnesses are run, for example we could use it to enable some of the debugging features of the harness while investigating test failures.

Finally for those who are stuck in their ways, it should still be possible to have a "classic try syntax" front-end to the new mozci backend. As large as this change sounds, it could be mostly transparent to the user. While I'm certainly not a fan of the current try syntax, there's no reason to begrudge the people who are.

Closing words

Try syntax has served us well for 5 long years. But it's almost time to move on to something better. Soon a lot of new avenues will be open and tools will be created that none of us have thought of yet. I'd like to thank all of the people mentioned in this post for their contributions in this area and I'm very excited for what the future holds.

The future is bright, and change is for the better.

Air MozillaWeb QA Weekly Meeting

Web QA Weekly Meeting This is our weekly gathering of Mozilla's Web QA team filled with discussion on our current and future projects, ideas, demos, and fun facts.

Mike TaylorDynamically updating <meta viewport> in the year 2015.

18 months after writing the net-award-winning article Dynamically updating <meta viewport> in the year 2014, I wrote some patches for Firefox for Android to make it possible to update a page's existing meta[name=viewport] element's content attribute and have the viewport be updated accordingly.

So when version 43 ships (at some point in 2015), code like this will work in more places than it did in 2014:

if(screen.width < 760) {
    viewport = document.querySelector("meta[name=viewport]");
    viewport.setAttribute('content', 'width=768');
}
if(screen.width > 760) {
    viewport = document.querySelector("meta[name=viewport]");
    viewport.setAttribute('content', 'width=1024');
}

I'll just go ahead and accept the 2015 netaward now, thanks for the votes everyone, wowowow.

Ian BickingConway’s Corollary

Conway’s Law states:

organizations which design systems are constrained to produce designs which are copies of the communication structures of these organizations

I’ve always read this as an accusation: we are doomed to recreate the structure of our organizations in the structure of software projects. And further: projects cannot become their True Selves, cannot realize the most superior design, unless the organization is itself practically structureless. That only without the constraints of structure can the engineer make the truly correct choices. Michelangelo sculpted from marble, a smooth and uniform stone, not from an aggregate, where any hit with the chisel might reveal only the chaotic structure and fault lines of the rock and not his vision.

But most software is built, not revealed. I’m starting to believe that Conway’s observation is a corollary, not so clearly cause-and-effect. Maybe we should work with it, not struggle against it. (With age I’ve lost the passion for pointless struggle.) It’s not that developers can’t imagine a design that goes contrary to the organizational structure, it’s that they can’t ship those designs. What we’re seeing is natural selection. And when through force of will such a design is shipped, whether it survives and is maintained depends on whether the organization changed in the process, whether a structure was created to support that design.

A second skepticism: must a particular construction and modularity of code be paramount? Code is malleable, and its modularity is for the purpose of humans. Most of what we do disappears anyway when the machine takes over – functions are inlined, types erased, the pieces become linked, and the machine doesn’t care one whit about everything we’ve done to make the software comprehensible. Modularity is to serve our purposes. And sometimes organization structure serves a purpose; we change it to meet goals, and we shouldn’t assume the people who change it are just busybodies. But those changes are often aspirational, and so those changes are setting ourselves up for conflict as the new structure probably does not mirror the software design.

If the parts of an organization (e.g. teams, departments, or subdivisions) do not closely reflect the essential parts of the product, or if the relationship between organizations do not reflect the relationships between product parts, then the project will be in trouble… Therefore: Make sure the organization is compatible with the product architecture – Coplien and Harrison

So change the architecture! There’s more than one way to resolve these tensions.

A last speculation: as described in the Second System Effect we see teams rearchitect systems with excessive modularity and abstraction. Maybe because they remember all these conflicts, they remember all the times organizational structure and product motivations didn’t match architecture. The team makes an incorrect response by creating an architecture that can simultaneously embody all imagined organizational structures, a granularity that embodies not just current organizational tensions but also organizational boundaries that have come and gone. But the value is only in predicting future changes in structure, and only then if you are lucky.

Maybe we should look at Conway’s Law as a prescription: projects should only have hard boundaries where there are organizational boundaries. Soft boundaries and definitions still exist everywhere: just like we give local variables meaningful names (even though outside the function no one can tell the difference), we might also create abstractions and modularity that serve immediate and concrete purposes. But they should only be built for the moment and the task at hand. Extra effort should be applied to being ready to refactor in the future, not predicting and embodying those predictions in present modularity. Perhaps this is another rephrasing of Agile and YAGNI. Code is a liability, agency over that code is an asset.

Nicholas NethercoteWhat does the OS X Activity Monitor’s “Energy Impact” actually measure?

[Update: this post has been updated with significant new information. Look to the end.]

Activity Monitor is a tool in Mac OS X that shows a variety of real-time process measurements. It is well-known and its “Energy Impact” measure (which was added in Mac OS X 10.9) is often consulted by users to compare the power consumption of different programs. Apple support documentation specifically recommends it for troubleshooting battery life problems, as do countless articles on the web.

However, despite its prominence, the exact meaning of the “Energy Impact” measure is unclear. In this blog post I use a combination of code inspection, measurements, and educated guesses to hypothesize how it is computed in Mac OS X 10.9 and 10.10.

What is known about “Energy Impact”?

The following screenshot shows the Activity Monitor’s “Energy” tab.

There are no units given for “Energy Impact” or “Avg Energy Impact”.

The Activity Monitor documentation says the following.

Energy Impact: A relative measure of the current energy consumption of the app. Lower numbers are better.

Avg Energy Impact: The average energy impact for the past 8 hours or since the Mac started up, whichever is shorter.

That is vague. Other Apple documentation says the following.

The Energy tab of Activity Monitor displays the Energy Impact of each open app based on a number of factors including CPU usage, network traffic, disk activity and more. The higher the number, the more impact an app has on battery power.

More detail, but still vague. Enough so that various other  people have wondered what it means. The most precise description I have found says the following.

If my recollection of the developer presentation slide on App Nap is correct, they are an abstract unit Apple created to represent several factors related to energy usage meant to compare programs relatively.

I don’t believe you can directly relate them to one simple unit, because they are from an arbitrary formula of multiple factors.

[…] To get the units they look at CPU usage, interrupts, and wakeups… track those using counters and apply that to the energy column as a relative measure of an app.

This sounds plausible, and we will soon see that it appears to be close to the truth.

A detour: top

First, a necessary detour. top is a program that is similar to Activity Monitor, but it runs from the command-line. Like Activity Monitor, top performs periodic measurements of many different things, including several that are relevant to power consumption: CPU usage, wakeups, and a “power” measure. To see all these together, invoke it as follows.

top -stats pid,command,cpu,idlew,power -o power -d

(A non-default invocation is necessary because the wakeups and power columns aren’t shown by default unless you have an extremely wide screen.)

It will show real-time data, updated once per second, like the following.

PID            COMMAND                  %CPU         IDLEW        POWER
50300          firefox                  12.9         278          26.6
76256          plugin-container         3.4          159          11.3
151            coreaudiod               0.9          68           4.3
76505          top                      1.5          1            1.6 
76354          Activity Monitor         1.0          0            1.0

The PID, COMMAND and %CPU columns are self-explanatory.

The IDLEW column is the number of package idle exit wakeups. These occur when the processor package (containing the cores, GPU, caches, etc.) transitions from a low-power idle state to the active state. This happens when the OS schedules a process to run due to some kind of event. Common causes of wakeups include scheduled timers going off and blocked I/O system calls receiving data.

What about the POWER column? top is open source, so its meaning can be determined conclusively by reading the powerscore_insert_cell function in the source code. (The POWER measure was added to top in OS X 10.9.0 and the code has remained unchanged all the way through to OS X 10.10.2, which is the most recent version for which the code is available.)

The following is a summary of what the code does, and it’s easier to understand if the %CPU and POWER computations are shown side-by-side.

|elapsed_us| is the length of the sample period
|used_us| is the time this process was running during the sample period

  %CPU = (used_us * 100.0) / elapsed_us

  POWER = if is_a_kernel_process()
            0
          else
            ((used_us + IDLEW * 500) * 100.0) / elapsed_us
          

The %CPU computation is as expected.

The POWER computation is a function of CPU and IDLEW. It’s basically the same as %CPU but with a “tax” of 500 microseconds for each wakeup and an exception for kernel processes. The value of this function can easily exceed 100 — e.g. a program with zero CPU usage and 3,000 wakeups per second will have a POWER score of 150 — so it is not a percentage. In fact, POWER is a unitless measure because it is a semi-arbitrary combination of two measures with incompatible units.
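
To make that example concrete, here is the same formula restated as a few lines of Python (my own restatement of top's calculation, not Apple's code; the kernel-process exception is omitted):

def power_score(used_us, idle_wakeups, elapsed_us=1000000):
    # %CPU plus a 500 microsecond "tax" per package idle exit wakeup,
    # expressed over the sample period.
    return ((used_us + idle_wakeups * 500) * 100.0) / elapsed_us

power_score(0, 3000)  # zero CPU, 3,000 wakeups in a 1 second sample => 150.0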

Back to Activity Monitor and “Energy Impact”

MacBook Pro running Mac OS X 10.9.5

First, I did some measurements with a MacBook Pro with an i7-4960HQ processor running Mac OS X 10.9.5.

I did extensive testing with a range of programs: ones that trigger 100% CPU usage; ones that trigger controllable numbers of idle wakeups; ones that stress the memory system heavily; ones that perform frequent disk operations; and ones that perform frequent network operations.

In every case, Activity Monitor’s “Energy Impact” was the same as top‘s POWER measure. Every indication is that the two are computed identically on this machine.

For example, consider the data in the following table. The data was gathered with a small test program that fires a timer N times per second; other than in extreme cases (see below) each timer firing causes an idle platform wakeup.

-----------------------------------------------------------------------------
Hz     CPU ms/s   Intr        Pkg Idle   Pkg Power  Act.Mon. top
-----------------------------------------------------------------------------
     2     0.14        2.00       1.80     2.30W     0.1    0.1
   100     4.52      100.13      95.14     3.29W       5      5
   500     9.26      499.66     483.87     3.50W      25     25
  1000    19.89     1000.15     978.77     5.23W      50     50
  5000    17.87     4993.10    4907.54    14.50W     240    240
 10000    32.63     9976.38    9194.70    17.61W     485    480
 20000    66.66    19970.95   17849.55    21.81W     910    910
 30000    99.62    28332.79   25899.13    23.89W    1300   1300
 40000   132.08    37255.47   33070.19    24.43W    1610   1650
 50000   160.79    46170.83   42665.61    27.31W    2100   2100
 60000   281.19    58871.47   32062.39    29.92W    1600   1650
 70000   276.43    67023.00   14782.03    31.86W     780    750
 80000   304.16    81624.60     258.22    35.72W      43     45
 90000   333.20    90100.26     153.13    37.93W      40     42
100000   363.94    98789.49      44.18    39.31W      38     38

The table shows a variety of measurements for this program for different values of N. Columns 2–5 are from powermetrics, and show CPU usage, interrupt frequency, package idle wakeup frequency, and package power, respectively. Column 6 is Activity Monitor’s “Energy Impact”, and column 7 is top‘s POWER measurement. Columns 6 and 7 (which are approximate measurements) are identical, modulo small variations due to the noisiness of these measurements.

MacBook Air running Mac OS X 10.10.4

I also tested a MacBook Air with an i5-4250U processor running Mac OS X 10.10.4. The results were substantially different.

-----------------------------------------------------------------------------
Hz     CPU ms/s   Intr        Pkg Idle   Pkg Power Act.Mon. top
-----------------------------------------------------------------------------
     2     0.21        2.00       2.00     0.63W   0.0     0.1
   100     6.75       99.29      96.69     0.81W   2.4     5.2
   500    22.52      499.40     475.04     1.15W   10       25
  1000    44.07      998.93     960.59     1.67W   21       48
  3000   109.71     3001.05    2917.54     3.80W   60      145
  5000    65.02     4996.13    4781.43     3.79W   90      230
  7500   107.53     7483.57    7083.90     4.31W   140     350
 10000   144.00     9981.25    9381.06     4.37W   190     460

The results from top are very similar to those from the other machine. But Activity Monitor’s “Energy Impact” no longer matches top‘s POWER measure. As a result it is much harder to say with confidence what “Energy Impact” represents on this machine. I tried tweaking the previous formula so that the idle wakeup “tax” drops from 500 microseconds to 180 or 200 microseconds and that gives results that appear to be in the ballpark but don’t match exactly. I’m a bit skeptical whether Activity Monitor is doing all its measurements at the same time or not. But it’s also quite possible that other inputs have been added to the function that computes “Energy Impact”.

What about “Avg Energy Impact”?

What about the “Avg Energy Impact”? It seems reasonable to assume it is computed in the same way as “Energy Impact”, but averaged over a longer period. In fact, we already know that period from the Apple documentation that says it is the “average energy impact for the past 8 hours or since the Mac started up, whichever is shorter.”

Indeed, when the Energy tab of Activity Monitor is first opened, the “Avg Energy Impact” column is empty and the title bar says “Activity Monitor (Processing…)”. After a few seconds the “Avg Energy Impact” column is populated with values and the title bar changes to “Activity Monitor (Applications in last 8 hours)”. If you have top open during those 5–10 seconds you can see that systemstats is running and using a lot of CPU, so presumably the measurements are obtained from it.

systemstats is a program that runs all the time and periodically measures, among other things, CPU usage and idle wakeups for each running process (visible in the “Processes” section of its output). I’ve done further tests that indicate that the “Avg Energy Impact” is almost certainly computed using the same formula as “Energy Impact”. The difference is that the measurements are from the past 8 hours of wake time — i.e. if a laptop is closed for several hours and then reopened, those hours are not included in the calculation — as opposed to the 1, 2 or 5 seconds of wake time used for “Energy Impact”.

battery status menu

Even more prominent than Activity Monitor is OS X’s battery status menu. When you click on the battery icon in the OS X menu bar you get a drop-down menu which includes a list of “Apps Using Significant Energy”.

Screenshot of the OS X battery status menu

How is this determined? When you open this menu for the first time in a while it says “Collecting Power Usage Information” for a few seconds, and if you have top open during that time you see that, once again, systemstats is running and using a lot of CPU. Furthermore, if you click on an application name in the menu Activity Monitor will be opened and that application’s entry will be highlighted. Based on these facts it seems reasonable to assume that “Energy Impact” is again being used to determine which applications show up in the battery status menu.

I did some more tests (on my MacBook Pro running 10.9.5) and it appears that once an energy-intensive application is started it takes about 20 or 30 seconds for it to show up in the battery status menu. And once the application stops using high amounts of energy I’ve seen it take between 4 and 10 minutes to disappear. The exception is if the application is closed, in which case it disappears immediately.

Finally, I tried to determine the significance threshold. It appears that a program with an “Energy Impact” of roughly 20 or more will eventually show up as significant, and programs that have much higher “Energy Impact” values tend to show up more quickly.

All of these battery status menu observations are difficult to make reliably and so should be treated with caution. They may also be different in OS X 10.10. It is clear, however, that the window used by the battery status menu is measured in seconds or minutes, which is much less than the 8 hour window used for “Avg Energy Impact”.

An aside: systemstats is always running on OS X. The particular invocation used for the long-running instance — the one used by both Activity Monitor and the battery status menu — takes the undocumented --xpc flag. When I tried running it with that flag I got an error message saying “This mode should only be invoked by launchd”. So it’s hard to know how often it’s making measurements. The output from vanilla command-line invocations indicates it’s about every 10 minutes.

But it’s worth noting that systemstats has a -J option which causes the CPU usage and wakeups for child processes to be attributed to their parents. It seems likely that the --xpc option triggers the same behaviour because the Activity Monitor does not show “Avg Energy Impact” for child processes (as can be seen in the screenshot above for the login, bash and vim processes that are children of the Terminal process). This hypothesis also matches up with the battery status menu, which never shows child processes. One consequence of this is that if you ssh into a Mac and run a power-intensive program from the command line it will not show up in Activity Monitor’s energy tab or the battery status menu, because it’s not attributable to a top-level process such as Terminal! Such processes will show up in top and in Activity Monitor’s CPU tab, however.

How good a measure is “Energy Impact”?

We’ve now seen that “Energy Impact” is used widely throughout OS X. How good a measure is it?

The best way to measure power consumption is to actually measure power consumption. One way to do this is to use an ammeter, but this is difficult. Another way is to measure how long it takes for the battery to drain, which is easier but slow and requires steady workloads. Alternatively, recent Intel hardware provides high-quality estimates of processor and memory power consumption that are relatively easy to obtain.

These approaches all have the virtue of measuring or estimating actual power consumption (i.e. Watts). But the big problem is that they are machine-wide measures that cannot be used on a per-process basis. This is why Activity Monitor uses several proxy measures — ones that correlate with power consumption — which can be measured on a per-process basis. “Energy Impact” is a hybrid of at least two different proxy measures: CPU usage and wakeup frequency.

The main problem with this is that “Energy Impact” is an exaggerated measure. Look at the first table above, with data from the 10.9.5 machine. The variation in the “Pkg Power” column — which shows the package power from the above-mentioned Intel hardware estimates — is vastly smaller than the variation in the “Energy Impact” measurements. For example, going from 1,000 to 10,000 wakeups per second increases the package power by 3.4x, but the “Energy Impact” increases by 9.7x, and the skew gets even worse at higher wakeup frequencies. “Energy Impact” clearly weights wakeups too heavily. (In the second table, with data from the 10.10.4 machine, the weight given to wakeups is less, but still too high.)

Also, in the first table “Energy Impact” actually decreases when the timer frequency gets high enough. Presumably this is because the timer interval is so short that the OS has trouble putting the package into an idle power state. This leads to the absurd result that firing a timer at 1,000 Hz has about the same “Energy Impact” value as firing one at 100,000 Hz, when the package power of the latter is about 7.5x higher.

Having said all that, it’s understandable why Apple uses formulations of this kind for “Energy Impact”.

  • CPU usage and wakeup frequency are probably the two most important factors affecting a process’s power consumption, and they are factors that can be measured on a per-process basis.
  • Having a single measure makes things easy for users; evaluating the relative importance of multiple measures is more difficult.
  • The exception for kernel processes (which always have an “Energy Impact” of 0) avoids OS X itself being blamed for high power consumption. This makes a certain amount of sense — it’s not like users can close the kernel — while also being somewhat misleading.

If I were in charge of Apple’s Activity Monitor product, I’d do two things.

  1. I would compute a new formula for “Energy Impact”. I would measure the CPU usage, wakeup frequency (and any other inputs) and actual power consumption for a range of real-world programs, on a range of different Apple machines. From this data, hopefully a reasonably accurate model could be constructed (see the sketch after this list). It wouldn’t be perfect, and it wouldn’t need to be perfect, but it should be possible to come up with something that reflects actual power consumption better than the existing formulations. Once formulated, I would then test the new version against synthetic microbenchmarks, like the ones I used above, to see how it holds up. Given the choice between accurately modelling real-world applications and accurately modelling synthetic microbenchmarks, I would definitely favour the former.
  2. I would publicly document the formula that is used so that developers can actually tell how their applications are being evaluated, and can optimize for that measure. You may think “but then developers will be optimizing for a synthetic measure rather than a real one” and you’d be right. That’s an inevitable consequence of giving a synthetic measure such prominence, and all the more reason for improving it.
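
To make the first suggestion concrete, here is a minimal sketch (in Python, using least squares) of the kind of model fitting I have in mind. It is not Apple’s method, and the sample numbers are invented purely for illustration; a real effort would use many more programs, machines and input factors.

import numpy as np

# Hypothetical measurements for a handful of programs:
# columns are CPU %, wakeups per second, and measured package power in Watts.
# (These numbers are made up purely to show the shape of the calculation.)
samples = np.array([
    [  2.0,    10.0, 0.6],
    [ 25.0,   100.0, 2.1],
    [ 80.0,  1000.0, 5.0],
    [ 95.0, 10000.0, 8.3],
])

inputs = np.column_stack([samples[:, 0],           # CPU usage
                          samples[:, 1],           # wakeup frequency
                          np.ones(len(samples))])  # constant baseline term
measured_power = samples[:, 2]

# Least-squares fit: power ~ a*cpu + b*wakeups + c
(a, b, c), *_ = np.linalg.lstsq(inputs, measured_power, rcond=None)
print("power ~ %.4f*cpu%% + %.6f*wakeups/s + %.4f W" % (a, b, c))

The fitted weights could then replace the hand-tuned constants, and the resulting formula could be sanity-checked against the synthetic microbenchmarks afterwards.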

Conclusion

“Energy Impact” is a flawed measure of an application’s power consumption. Nonetheless, it’s what many people currently use to evaluate the power consumption of OS X applications, so it’s worth understanding. And if you are an OS X application developer who wants to reduce the “Energy Impact” of your application, it’s clear that it’s best to focus first on reducing wakeup frequency, and then on reducing CPU usage.

Because Activity Monitor is closed source, I don’t know if I’ve characterized “Energy Impact” exactly correctly. The evidence given above indicates that I am close on 10.9.5, but not as close on 10.10.4. I’d love to hear if anybody has evidence that either corroborates or contradicts the conclusions I’ve made here. Thank you.

Update

A commenter named comex has done some great detective work and found that, on 10.10 and 10.11, Activity Monitor consults a Mac model-specific file in the /usr/share/pmenergy/ directory. (Thank you, comex.)

For example, my MacBook Air has a model number 7DF21CB3ED6977E5 and the file Mac-7DF21CB3ED6977E5.plist has the following list of key/value pairs under the heading “energy_constants”.

kcpu_time               1.0
kcpu_wakeups            2.0e-4

This matches the previously seen formula, but with the wakeups “tax” being 200 microseconds, which matches what I hypothesized above.
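
To put the wakeups “tax” in concrete terms (assuming, as hypothesized, that these constants are in units of CPU-time-equivalent seconds per event): 2.0e-4 s is 200 microseconds, so a process that does nothing but wake up 1,000 times per second is charged the equivalent of 1,000 × 200 µs = 0.2 s of CPU time per second, i.e. the same as a process using 20% of a core.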

kqos_default            1.0e+00
kqos_background         5.2e-01
kqos_utility            1.0e+00
kqos_legacy             1.0e+00         
kqos_user_initiated     1.0e+00
kqos_user_interactive   1.0e+00

“QoS” refers to quality of service classes, which allow an application to mark some of its own work as lower priority. I’m not sure exactly how this is factored in, but from the numbers above it appears that operations done in the lowest-priority “background” class are considered to have an energy impact about half that of operations done in any of the other classes.

kdiskio_bytesread       0.0
kdiskio_byteswritten    5.3e-10

These ones are straightforward. Note that the “tax” for disk reads is zero, and for disk writes it’s a very small number. I wrote a small program that wrote endlessly to disk and saw that the “Energy Impact” was slightly higher than the CPU percentage alone, which matches expectations.
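
Under the same interpretation, 5.3e-10 per byte means a process would have to write roughly 0.01 / 5.3e-10 ≈ 19 MB per second for its disk writes to count the same as 1% of CPU usage, which is consistent with the small bump described above.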

kgpu_time               3.0e+00

It makes sense that GPU usage is included in the formula. It’s not clear if this refers to the integrated GPU or the separate (higher performance, higher power) GPU. It’s also interesting that the weighting is 3x.

knetwork_recv_bytes     0.0 
knetwork_recv_packets   4.0e-6
knetwork_sent_bytes     0.0
knetwork_sent_packets   4.0e-6

These are also straightforward. In this case the number of bytes sent and received is ignored; only the number of packets matters, and the cost of receiving and sending a packet is considered equal.

So, in conclusion, on 10.10 and 10.11, the formula used to compute “Energy Impact” is machine model-specific, and includes the following factors: CPU usage, wakeup frequency, quality of service class usage, and disk, GPU, and network activity.
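
For illustration, here is a rough sketch of the weighted sum these constants suggest. This is not Activity Monitor’s actual code: the per-second normalization, the omission of the QoS scaling, and the field names of the per-process sample are my own assumptions.

import plistlib

def load_energy_constants(model="Mac-7DF21CB3ED6977E5"):
    # Load the model-specific constants file that comex found.
    with open("/usr/share/pmenergy/%s.plist" % model, "rb") as f:
        return plistlib.load(f)["energy_constants"]

def energy_impact(counters, k, interval=1.0):
    # `counters` holds per-process totals accumulated over `interval` seconds;
    # the field names are invented for this sketch, and the QoS scaling is
    # ignored. Treating the result like a CPU percentage is also an assumption.
    score = (k["kcpu_time"] * counters["cpu_seconds"]
             + k["kcpu_wakeups"] * counters["wakeups"]
             + k["kgpu_time"] * counters["gpu_seconds"]
             + k["kdiskio_bytesread"] * counters["bytes_read"]
             + k["kdiskio_byteswritten"] * counters["bytes_written"]
             + k["knetwork_recv_packets"] * counters["packets_received"]
             + k["knetwork_sent_packets"] * counters["packets_sent"])
    return 100.0 * score / interval

constants = load_energy_constants()
print(energy_impact({"cpu_seconds": 0.3, "wakeups": 500, "gpu_seconds": 0.0,
                     "bytes_read": 0, "bytes_written": 0,
                     "packets_received": 0, "packets_sent": 0}, constants))

With the constants above this example works out to 100 × (0.3 + 500 × 2.0e-4) = 40.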

This is definitely an improvement over the formula used in 10.9, which is great to see. The parameters are also visible, if you know where to look! It would be wonderful if all these inputs, along with their relative weightings, could be seen at once in Activity Monitor. That way developers would have a much better sense of exactly how their application’s “Energy Impact” is determined.

Jonathan GriffinEngineering Productivity Update, August 26, 2015

It’s PTO season and many people have taken a few days or a week off.  While they’re away, the team continues making progress on a variety of fronts.  Planning also continues for GoFaster and addon-signing, which will both likely be significant projects for the team in Q4.

Highlights

Treeherder: camd rolled out a change which collapses chunked jobs on Treeherder, reducing visual noise.  In the future, we plan on significantly increasing the number of chunks of many jobs in order to reduce runtimes, so this change makes that work more practical.  See camd’s blog post.  emorley has landed a change which allows TaskCluster job errors that occur outside of mozharness to be properly handled by Treeherder.

Automatic Starring: jgraham has developed a basic backend which supports recognizing simple intermittent failures, and is working on integrating that into Treeherder; mdoglio is landing some related database changes. ekyle has received sheriff training from RyanVM, and plans to use this to help improve the automated failure recognition algorithm.

Perfherder and Performance Testing: Datazilla has finally been decommissioned (R.I.P.), in favor of our newest performance analysis tool, Perfherder.  A lot of Talos documentation updates have been made at https://wiki.mozilla.org/Buildbot/Talos, including details about how we perform calculations on data produced by Talos.  wlach performed a useful post-mortem of Eideticker, with several takeaways which should be applicable to many other projects.

MozReview and Autoland: There’s a MozReview meetup underway, so expect some cool updates next time!

TaskCluster Support: ted has made a successful cross-compiled OSX build using TaskCluster!  Take it for a spin.  More work is needed before we can move OSX builds from the mac mini builders to the cloud.

Mobile Automation: gbrown continues to make improvements on the new |mach emulator| command which makes running Android tests locally on an emulator very simple.

General Automation: run-by-dir is live on opt mochitest-plain; debug and ASAN coming soon.  This reduces test “bleed-through” and makes it easier to change chunking.  adusca, our Outreachy intern, is working to integrate the try extender into Treeherder.  And ahal has merged the mozharness “in-tree” configs with the regular mozharness config files, now that mozharness lives in the tree.

Firefox Automation: YouTube ad detection has been improved for firefox-media-tests by maja, which fixes the source of the top intermittent failure in this suite.

Bughunter: bc has got asan-opt builds running in production, and is working on gtk3 support.

hg.mozilla.org: gps has enabled syntax highlighting in hgweb, and has added a new JSON API as well.  See gps’ blog post.

The Details

bugzilla.mozilla.org
Treeherder
Perfherder/Performance Testing
  • talos cleanup and preparation to move in-tree
  • perfherder database cleanup in progress for simpler and more optimized queries. This is mainly preparatory work for making perfherder capable of managing/starring performance alerts, but as a bonus perfherder compare view should load virtually instantly once this is finished. 
  • most talos wiki docs are updated: https://wiki.mozilla.org/Buildbot/Talos
TaskCluster Support
Mobile Automation
  •  [gbrown] Working on “mach emulator” support: the work in progress can download and run 2.3, 4.3, or x86 emulator images. Integrating with other mach commands like “install” and “mochitest”.
  •  [gbrown] Updated mochitest manifests to run most dom/media mochitests on Android 4.3 (under review, bug 1189784)
Firefox and Media Automation
  • [maja_zf] Improved ad detection on YouTube for firefox-media-tests, which fixes our top intermittent failure for long-running playback tests.
General Automation
  •  run-by-dir is live for mochitest-plain (opt only); debug is coming soon, followed by ASAN.
  • Mozilla CI tools is moving from using BuildAPI as the scheduling entry point to using TaskCluster’s scheduling. This work will allow us to schedule a graph of buildbot jobs and their dependencies in one shot. https://bugzil.la/1194264
  • adusca is integrating into treeherder the ability to extend the jobs run for any push. This is based on the http://try-extender.herokuapp.com prototype. Follow along in https://bugzil.la/1194830
  • Git was deployed to the test machines. This is necessary to make the Firefox UI update tests work on them.
  • [ahal] merge mozharness in-tree configs with the main mozharness configs
ActiveData
  • Bug fixes to the ETL – fix bad lookups on hg repo, mostly l10n builds 
  • More error reporting on ETL – Structured logging has changed a little; handle the new variations, be more elegant when it comes to unknowns, and complain when there is non-conformance.
  • Some work on adding hg `repo` table – acts as a cache for ETL, but can be used to calculate ‘per push’ statistics on OrangeFactor data.
  • Added Talos to the `perf` table – used the old Datazilla ETL code to fill the ES cluster.  This may speed up extracting the replicates, for exploring the behaviour of a test.
  • Enable deep queries – effectively performing an SQL join on Elasticsearch. The first attempt did too much refactoring; the second attempt is simpler, but we’re still slogging through all the resulting test breakage.
hg.mozilla.org
WebDriver
  • Updated 
Marionette
  • [ahal] helped review and finish contributor patch for switching marionette_client from optparse to argparse
  • Corrected UUID used for session ID and element IDs
  • Updated dispatching of various marionette calls in Gecko
bughunter
  • [bc] Have asan-opt builds running in production. Finalizing patch. Still need to build gtk3 for rhel6 32bit in order to stop using custom builds and support opt in addition to debug.
charts.mozilla.org
  • Updated the hierarchical burndowns to EPM’s important metabugs that track features 
  • More config changes

Support.Mozilla.OrgSUMO Questions Day this Thursday, 27 August 2015

The summer holidays are now over, so it’s time to start organizing a new SUMO Day!

What are SUMO Days?

A SUMO Day is that time of the week when everybody who loves doing support (contributors, admins, moderators) gathers together to try and answer all the incoming questions on the Mozilla support forums. This is a 24-hour event: we will start early during European mornings and finish late during US Pacific evenings.

We are also hanging out having fun and helping each other in #sumo on IRC.

I want to participate! Where do I start?

Just create an account and then take some time to help with unanswered questions. We have an etherpad ready with all the details plus additional tips and resources.

If you get stuck with questions that are too difficult, feel free to ping us in #sumo on IRC or ask for help on the contributors forums.

Moderators

SUMO Day will be moderated by madasan (EU morning/afternoon), marksc (EU afternoon/US morning), guigs (US morning/afternoon). We can always use more people to help moderate through the day so if you would like to do this just add your name in the etherpad!

What does it mean to be a SUMO Day moderator?

It’s easy! Just check out the forums and monitor incoming questions. Don’t forget to hang out on IRC on #sumo and the contributor forums and chat with the other SUMO Day participants about possible solutions to questions. As a moderator you also help out contributors who are stuck with difficult questions and need help.

Screensharing experiment

During this SUMO Day some of us will experiment with helping users via screen sharing. This is only open to senior contributors and forum moderators so if you’re one of them and you would like to participate please PM madasan.

We’re trying to answer each and every incoming question on the support forum on Thursday, so please join us. The more the merrier!

See you online and happy SUMO Day!

Air MozillaProject Beehive: A HW/SW co-designed stack for runtime and architectural research.

Project Beehive: A HW/SW co-designed stack for runtime and architectural research. In this talk we will present an overview of our recent research efforts focusing on a HW/SW co-designed platform for heterogeneous many-core architectural research. The presented...

Will Kahn-Greenepytest-wholenodeid addon: v0.2 released!

What is it?

pytest-wholenodeid is a pytest addon that shows the whole node id on failure rather than just the domain part. This makes it a lot easier to copy and paste the entire node id and re-run the test.
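
For example (the file and test names here are hypothetical), a whole node id looks like this:

tests/test_widgets.py::TestWidgets::test_resize

and pasting it straight back into pytest re-runs exactly that test:

pytest tests/test_widgets.py::TestWidgets::test_resize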

v0.2 released!

I wrote it in an hour today to make it easier to deal with test failures. Then I figured I'd turn it into a real project so friends could use it. Now you can use it, too!

I originally released v0.1 (the first release) and then noticed on PyPI that the description was a mess, so I fixed that and released v0.2.

To install:

pip install pytest-wholenodeid

It runs automatically. If you want to disable it temporarily, pass the --nowholeid argument to pytest.

More details on exactly what it does are on the PyPI page.

If you use it and find issues, write up an issue in the issue tracker.