Mozilla Add-ons Blog: Want to help select featured add-ons?

Six months have gone by quickly, and it’s time again to choose new members for the featured add-ons board. Board members are responsible for deciding which add-ons are featured on AMO in the next six months. Featured add-ons help users discover what’s new and useful, and downloads spike in the months they are featured, so your participation really makes an impact!

Anyone from the add-ons community is welcome to apply: power users, theme designers, developers, and evangelists. Priority will be given to applicants who have not served on the board before, followed by those from previous boards, and finally those from the outgoing board. This page provides more information on the duties of a board member.

To be considered, please email us at amo-featured@mozilla.org with your name, and tell us how you’re involved with AMO. The deadline is Sunday, Nov 9, 2014 at 23:59 PDT. The new board will be announced about a week later.

We look forward to hearing from you!

Air Mozilla: Webmaker Demos October 31


Open Policy & Advocacy: Net Neutrality in the U.S. Reaches a Tipping Point

We’ve spent years working to advance net neutrality all around the world. This year, net neutrality in the United States became a core focus of ours because of the major U.S. court decision striking down the existing Federal Communications Commission (FCC) rules. The pressure for change in the U.S. has continued to grow, fueled by a large coalition of public interest organizations, including Mozilla, and by the voices of millions of individual Americans.

In May, we filed a petition to the FCC to propose a new path forward to adopt strong, user- and innovation-protecting rules once and for all. We followed that up by mobilizing our community, organizing global teach-ins on net neutrality, and submitting initial comments and reply comments to the agency. We also joined a major day of action and co-authored a letter to the President. We care about this issue.

Net neutrality has now reached a tipping point. As the days grow shorter, the meetings on the topic grow longer. We believe the baseline of what we can expect has gone up, and now, rumored likely outcomes all include some element of Title II, or common carrier, protections sought by advocates against significant opposition. We don’t know what that will look like, or whether the baseline has come up enough. Still, we are asking the FCC for what we believe the open Internet needs to ensure a level playing field for user choices and innovation.

Our baseline:
• Strong rules against blocking and discrimination online, to prevent the open, generative Internet from being closed off by gatekeepers in the access service;
• Title II in the “last mile” – the local portion of the network controlled by the Internet access service provider – to help ensure the FCC’s authority to issue net neutrality rules will survive challenge; and
• The same framework and rules applied to mobile as well as fixed access services.

Our Petition focused on the question of where the FCC derives its authority. We told the FCC we support both hybrid classification proposals and reclassification, and, choosing between the two, we prefer reclassification as the simplest, cleanest path forward. But we believe both paths would allow the FCC to adopt the same strong rules to protect the open Internet, and survive court review.

We don’t know where this proceeding will end up. We will continue to do whatever we can to achieve our baseline. Stay tuned.

SUMO Blog: What’s up with SUMO – 31 October

A spooky welcome to all of our readers, ’tis the season to be scary (in some parts of the world). We hope you’re having a good time now (or will be having some soon enough).

New arrivals to SUMO – greetings and salutations go out to…

SUMO Community meeting update

We did not meet last Monday due to too many people being en route and absent from the Vidyo stream – sorry! You will find all the most relevant updates below, of course.

A quick note on Community meetings in general: if you want to contribute a discussion item (we love hearing your thoughts and ideas!), please, please, please – post it on the Contributor Forums first, so that everyone in the community can take a look, think about it and give you feedback before we meet on Monday to discuss it. The sooner you do it, the more people will be aware of your voice. Thank you!

Mozilla Community Survey

Mozilla’s Community Building Team is conducting a study to learn about the values and motivations of Mozilla’s contributors (this means you), and to understand how we can improve their (this means your) experiences. To learn about the relationship between contributors’ values and contributions, the team would also like to conduct analyses that take account of everyone’s contributions.

If there’s only one more link you’re going to click before the weekend, this is it. Please help us help you help others even more. Take the survey here.

Thunderbird Feedback Request

We need your thoughts and ideas on the Gmail and OAUTH article. Take a look here: https://support.mozilla.org/en-US/kb/thunderbird-and-gmail/history and either email rtanglao AT mozilla.com or ping rolandtanglao in IRC… or edit the KB article yourself!

Post-MozFest update

If you can’t get enough of MozFest, do remember that people keep posting new stuff all the time on Twitter. There’s also a great round-up of a lot of interesting projects that happened at the festival on the Webmaker blog, and here’s one from The Guardian.

I delivered a session on l10n (slides here) and got to meet some of you in person. I also attended quite a few inspiring and informative sessions (it’s not possible to attend them all, unfortunately), the experience of which will definitely help me make SUMO better for you.

In short, the energy was amazing, the pacing was relentless and the people were fantastic. If you have the spirit and skills of an aspiring creator, influencer or experimenter – you should be there in 2015.

That’s it for today, tune in next week for more news and updates. As usual, remember that you can find us on Twitter or on IRC. We’re looking forward to talking to you!

QMO: Firefox 34 Beta 7 Testday, November 7th

Hello mozillians,

We are happy to announce that on Friday, November 7th, we’re going to hold the Firefox 34.0 Beta 7 Testday. We will be testing the latest Beta build, with a focus on the most recent changes and fixes. Detailed instructions on how to get involved can be found in this etherpad.

No previous testing experience is required, so feel free to join via the #qa IRC channel, and our moderators will offer you guidance and answer your questions as you go along.

Join us next Friday and let’s make Firefox better together!

When: November 7, 2014.

Software Carpentry: Particle Physicists Pulling Themselves From The Swamp

What does it mean to work on a modern particle physics experiment like ATLAS (wikipedia, public) or CMS (wikipedia, public) at the Large Hadron Collider in the 21st century? It's fun, it's collaborating with great and interesting people, it's challenging, it's making you enjoy finding things out, it's what I always wanted to do. Also: it is painful, discouraging, and tends to suck the life out of a young mind. Confused? Let's rewind...

Five years ago I was a young man entering a field that I knew from textbook excerpts and stories of Nobel prizes. This field centered around a handful of large laboratories worldwide that host some of the most advanced technology mankind has ever built: particle accelerators. Billions are invested to build a machine that accelerates particles to a near-light-speed orbit in order to bring them to a high-energy collision. Around these fixed collision points, multi-purpose experiments the size of multi-apartment houses record the particles thus produced to reconstruct what happened during the event and obtain deeper knowledge of the underlying physics.

YouTube video produced by the ATLAS collaboration explaining what we do, ATLAS Experiment (c) 2014 CERN

If I say multi-apartment houses, you might imagine that analyzing the data of these experiments is far from trivial. The collaborations running these machines provide physicists with object-oriented frameworks (cmssw, gaudi, athena) of the order of 7-10 kLOC (thousands of lines of code) that facilitate the read-out, simulation, filtering and analysis of this data. Sophisticated statistics software is used further down the pipeline to produce publication grade plots, make statistical inferences, and eventually extract knowledge.

Why is it so frustrating then? University curricula for physicists hardly ever contain programming skills, much less conceptual knowledge of object-oriented programming, revision control, software design, parallel computing, and the like. Neither do prep courses in the early stages of a PhD. Most of the time, there are only wiki pages enabling copy-and-paste-style learning-by-doing: they get you going, but never explain the underlying principles. But I am a scientist: I need to understand what is going on and use my tools to the best of my knowledge to make inferences.

ATLAS experiment open detector, ATLAS Experiment (c) 2014 CERN

One and a half years into my thesis, I was personally so frustrated by spending hours in front of the computer trying to understand other people's code that I became determined to change the situation. At the same time, a federal German funding agency initiated a German network for particle physics, the Helmholtz Alliance "Physics at the Terascale", to induce higher scientific throughput just in time for the LHC coming online in 2009. This funding program also contained budgets for training of staff. Motivated by a never-ending daily struggle to work with object-oriented frameworks that were poorly documented and yet state-of-the-art, I started to inquire about the possibility of organizing a workshop on software design principles. It was my hope that by transferring knowledge from computer science to physics students at the keyboard, we could pull ourselves out of the swamp and finally understand why we program as we do and how we can work effectively with code (both our own and others'), and as such be more productive. After all, I chose to be a PhD student to do physics, not fight code!

Lord Muenchhausen pulls himself and his horse from the swamp (German tall tale)

Thanks to the support of my supervisor and the motivation of my fellow students and post-docs, we were able to find capable speakers. The only things left were getting the money in and advertising the idea to principal investigators so they'd send their students. And I can tell you, there were many PIs not willing to back us. After all, most of the group leaders had come of age during the Fortran era of particle physics, so there was a deep cultural canyon between their view of how people should work and what everyone faced in everyday development.

It took a year to set up the first workshop in 2010, which welcomed 25 participants. We started by recapping object-oriented programming, then introduced UML, and finally climbed the hill to discuss design patterns and class- and package-design principles. The week-long workshop concluded with a student exercise project that lasted almost an entire day. We also had two keynote speakers from the trenches of the local software industry (quite a clash of cultures, but a very insightful experience).

Given today's standards, many of the details of the workshop were not very well thought through, I believe. It was a lot of content for 4.5 days and was not always paired with exercises or the like. But we ingested a lot of cool stuff to feed our curiosity. Finally, someone had taught us the fundamental concepts we so desperately needed in order to grasp what we were doing every single day. This made my (professional) life a lot happier than it was before. And I would even claim it made me much more productive as a scientist.

Many participants gave us very positive feedback:

This workshop should be on everyone's curriculum in Particle Physics.

-- A participant of the 2014 workshop

Since then, I've been lucky enough to keep the workshop going. One year later, we again had 25 participants, with a keynote by the CEO of a local software consultancy. This time I was also able to pass on the knowledge I had acquired through the first workshop, giving a talk of my own on test-driven development. The following year, the workshop started to travel and was hosted by DESY in Hamburg. We had more than 50 participants there, which proved to be a challenge. We were used to a small group, which meant we could cover a lot of material by dynamically adapting our speed and depth. But that approach does not scale! So at the end of a tough week, we again received a lot of positive feedback, but we had to admit that 30-35 people is a good size to go with.

During that time, the focus of the workshop changed a bit. We were still covering object-oriented programming, good design practices, and design patterns, but we added refactoring (thanks to an excellent new contributor) because it relates much more closely to the day-to-day situation students face: sitting down and using the code of others. The number of exercises steadily increased, and I believe we are slowly converging to a 1:1 ratio of exercises to lectures. Finally, the scope of the audience became wider. We had to acknowledge that most particle- or nuclear-physics-related sciences also have a substantial need for software training. For example, in the field of detector construction, both the nuclear and particle physics communities commonly use GEANT (again an object-oriented framework) to set up and run detector simulations in a modular fashion. Participants from these fields repeatedly told us that the sort of training we offer is needed.

This year, the workshop was hosted in Munich. Inspired by Greg Wilson's PyCon talk, I tried some more interactive teaching elements like sticky notes, an etherpad, and live coding mixed with a lot of pair work (not pair programming yet). I have to say that people in this environment, where they feel they lack competence (and because they are physicists ;) ), are an especially difficult crowd. Sticky notes were not adopted at all – or I didn't "motivate" their use enough – so I dropped them half-way through the session.

On the other hand, the etherpad was a great success. It allowed me to bring the teaching into the notebook that everyone likes to hide behind. Live coding also worked out extremely well. I used it to teach C++ template metaprogramming over one entire morning. The topic is quite complicated, but live coding helped me adapt the speed and ensure that everyone could follow and reproduce my demonstration on their own. There was constant feedback from the participants, and people helped each other out. To be honest, I was surprised that live coding worked so well with a crowd of 33 students.

Finally, I put all my code and slides on GitHub (see the Performance versus Design C++ repository) for the students to share and fork. I have to say that this did not receive the attention from participants that I hoped it would. But that might have technical or usability reasons, or may simply reflect the fact that particle physics is mostly an ecosphere of its own, i.e. GitHub and the like are not yet common tools.

To conclude, I think we are well on the way to establishing a software-development-focused training curriculum in the particle physics community. Prompted by this blog post, we will start to publish our experiences, if possible at conferences and in peer-reviewed journals, in order to receive feedback, strengthen our quality assurance, and bring our experiences and motivation to the attention of more people. We hope our courses can be adapted in other countries or big laboratories, or even lead to a change of mind among PIs:

The data volumes at the LHC are steadily increasing, the analyses are becoming more complex, and so is the list of systematic uncertainties to be studied. You are forced to write good code if you want to be flexible and fast.

-- German Particle Physics Group Leader from Bonn (translated from German)

Not only that, but the workshop is also being recognized and appreciated by all involved (PIs and students):

I've been sending students to this workshop for many years. Even though many of them went with a let's-see-if-that-will-help attitude, they always came back full of motivation to code and with a lot of important insights into how to code. After the workshop, they developed great enthusiasm for well-designed code. Thank you very much for organizing the workshop. It is really well done! Keep the level where it is now.

-- German Particle Physics Group Leader from Aachen (translated from German)

Lastly, I would like to thank the individuals who have made the workshop a success over the last years. The core team that backs the workshop and is ready to present annually consists of Thomas Schörner-Sadenius (DESY Hamburg), Maria Pia Grazia (INFN Genoa), Stefan Kluth (MPI for Physics, Munich) and myself. Apart from these, I'd like to mention past contributors: Thomas Velz (University of Bonn, now in industry) was once a participant in the workshop and this year contributed as a teacher! Benedikt Hegner (CERN) and Eckhardt von Toerne (University of Bonn) both made substantial contributions in the past as well. Also, my gratitude goes to my supervisor at TU Dresden, Michael Kobel, and my colleagues there (most of all Wolfgang Mader), who supported me in organizing the workshops locally and motivated me throughout.

Air Mozilla: TechWomen Emerging Leader Presentations [MTV]

As part of the TechWomen program, three Emerging Leaders from Cameroon, Lebanon and Kenya will give short lightning talk presentations on their work and professional...

Mozilla India: Firefox OS Intensive Workshop in Bangalore and Delhi – Save the dates!

Are you an intermediate or experienced Web developer who is interested in Firefox OS, or who has tested the waters but wants to build richer, more compelling apps?

Join us for a two-day intensive workshop either in Delhi (Nov 8-9) or in Bangalore (Nov 16-17), focused on Firefox OS advanced application development, covering topics such as user research, developer tools, performance, power and data efficiency, toolkits and libraries, debugging, all with a focus on the 128MB devices shipping in India today. Learn how to write code for a range of typical use-cases, utilizing all of the features that the 128MB devices have to offer, with an overall goal of understanding your users and building powerful, relevant and performant apps that they’ll love.

Apply Today!


P.S: Participation is free and food/drinks will be served throughout the workshop. Venues of the workshops to be confirmed shortly. Space is limited to the top 30 qualified applicants for each workshop.

Mozilla will not cover anyone’s travel or accommodation for these events.

Mozilla UX: Why Do We Conduct Qualitative User Research?

The following post is based on a talk I presented at MozFest about interviewing users.

I recently had a conversation with a former colleague who now works for a major social network. In the course of our conversation this former colleague said to me, “You know, we have all the data in the world. We know what our users are doing and have analytics to track and measure it, but we don’t know why they do it. We don’t have any frameworks for understanding behavior outside of what we speculate about inside the building.”

In many technology organizations, the default assumption of user research is that it will be primarily quantitative research such as telemetry analyses, surveys, and A/B testing. Technology and business organizations often default to a positivist worldview and consequently believe that quantitative results that provide numeric measures have the most value. The hype surrounding big data methods (and the billions spent on marketing by vendors making certain you know about their enterprise big data tools) goes hand-in-hand with the perceived correctness of this set of assumptions. Given this ecosystem of belief, it’s not surprising that user research employing quantitative methods is perceived by many in our industry as the only user research an organization would need to conduct.

I work as a Lead User Researcher on Firefox. While I do conduct some quantitative user research, the focus of most of my work is qualitative research. In the technology environment described above, the qualitative research we conduct is sometimes met with skepticism. Some audiences believe our work is too “subjective” or “not reproducible.” Others may believe we simply run antiquated, market research-style focus groups (for the record, the Mozilla UR team doesn’t employ focus groups as a methodology).

I want to explain why qualitative research methods are essential for technology user research because of one well-documented and consistently observed facet of human social life: the concept of homophily.

This is a map of New York City based on the ethnicity of residents. Red is White, Blue is Black, Green is Asian, Orange is Hispanic, Yellow is Other, and each dot is 25 residents. Of course, there are historical and cultural reasons for the clustering, but these factors are part of the overall social dynamic.
Source: https://www.flickr.com/photos/walkingsf/

Homophily is the tendency of individuals to associate and bond with similar others (the now classic study of homophily in social networks). In other words, individuals are more likely to associate with others based on similarities rather than differences. Social scientists have studied social actors associating along multiple types of ascribed characteristics (status homophily) including gender, ethnicity, economic and social status, education, and occupation. Further, homophily exists among groups of individuals based on internal characteristics (value homophily) including values, beliefs, and attitudes. Studies have demonstrated both status and value homophilic clustering in smaller ethnographic studies and larger scale analyses of social network associations such as political beliefs on Twitter.

Photos on Flickr taken in NY by tourists and locals. Blue pictures are by locals. Red pictures are by tourists. Yellow pictures might be by either. Source: https://www.flickr.com/photos/walkingsf

I bring up this concept to emphasize how those of us who work in technology form our own homophilic bubble. We share similar experiences, information, beliefs, and processes about not just how to design and build products and services, but also how many of us use those products and services. These beliefs and behaviors become reinforced through the conversations we have with colleagues, the news we read in our press daily, and the conferences we attend to learn from others within our industry. The most insidious part of this homophilic bubble is how natural and self-evident the beliefs, knowledge, and behaviors generated within it appear to be.

Here’s another fact: other attitudes, beliefs, and motivations exist outside of our technology industry bubble, and many members of these groups use our products and services. Some groups share values and statuses similar to those of the technology world, but there are other, different values and different statuses. Further, there are values and statuses so radically different from ours that they never even enter the common vocabulary of our own technology industry homophilic bubble. To borrow from former US Secretary of Defense Donald Rumsfeld, “there are also unknown unknowns, things we don’t know we don’t know.”

This is all to say that insights, answers, and explanations are limited by the breadth of a researcher’s understanding of users’ behaviors. The only way to increase that breadth is by actually interacting with and investigating behaviors, beliefs, and assumptions outside of our own. Qualitative research provides multiple methodologies for getting outside of our homophilic bubble. We conduct in situ interviews, diary studies, and user tests (among other qualitative methods) in order to uncover these insights and unknown unknowns. The most exciting part of my own work is being surprised by a new insight or observation of what our users do, say, and believe. In research on various topics, we’ve seen and heard so many surprising answers.

There is no one research method that satisfies answering all of our questions. If the questions we are asking about user behavior, attitudes, and beliefs are based solely on assumptions formed in our homophilic bubble, we will not generate accurate insights about our users no matter how large the dataset. In other words, we only know what we know and can only ask questions framed about what we know. If we are measuring, we can only measure what we know to ask. Quantitative user research needs qualitative user research methods in order to know what we should be measuring and to provide examples, theories, and explanations. Likewise, qualitative research needs quantitative research to measure and validate our work as well as to uncover larger patterns we cannot see.

An example of quantitative and qualitative research working iteratively.

It is a disservice to users and ourselves to ask only how much or how often and to avoid understanding why or how. User research methods work best as an accumulation of triangulation points of data in a mutually supportive, on-going inquiry. More data points from multiple methods mean deeper insights and a deeper understanding. A deeper connection with our users means more human-centered and usable technology products and services. We can only get at that deeper connection by leaving the technology bubble and engaging with the complex, messy world outside of it. Have the courage to feel surprised and your assumptions challenged.

(Thanks to my colleague Gemma Petrie for her thoughts and suggestions.)

hacks.mozilla.org: Introducing SIMD.js

SIMD stands for Single Instruction Multiple Data, and is the name for performing operations on multiple data elements together. For example, a SIMD add instruction can add multiple values, in parallel. SIMD is a very popular technique for accelerating computations in graphics, audio, codecs, physics simulation, cryptography, and many other domains.

In addition to delivering performance, SIMD also reduces power usage, as it uses fewer instructions to do the same amount of work.

SIMD.js

SIMD.js is a new API being developed by Intel, Google, and Mozilla for JavaScript which introduces several new types and functions for doing SIMD computations. For example, the Float32x4 type represents 4 float32 values packed up together. The API contains functions to operate on those values together, including all the basic arithmetic operations, and operations to rearrange, load, and store such values. The intent is for browsers to implement this API directly, and provide optimized implementations that make use of SIMD instructions in the underlying hardware.
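The lane-wise semantics can be sketched with plain typed arrays. Note that `float32x4` and `add` below are local stand-ins written only so the snippet runs in any JavaScript engine today; the real API exposes these as `SIMD.Float32x4(...)` and `SIMD.Float32x4.add(...)` (names per the polyfill, and still subject to change):

```javascript
// Hand-rolled stand-in for the Float32x4 type: four float32 lanes
// packed together in a typed array.
function float32x4(x, y, z, w) {
  return Float32Array.of(x, y, z, w);
}

// Lane-wise add: conceptually a single SIMD instruction that adds
// all 4 pairs of lanes at once.
function add(a, b) {
  return Float32Array.of(a[0] + b[0], a[1] + b[1], a[2] + b[2], a[3] + b[3]);
}

const a = float32x4(1, 2, 3, 4);
const b = float32x4(10, 20, 30, 40);
console.log(Array.from(add(a, b))); // [ 11, 22, 33, 44 ]
```

A native implementation would lower the `add` call to one hardware instruction (e.g. SSE `addps` or NEON `vadd.f32`) instead of four scalar additions.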

The focus is currently on supporting both x86 platforms with SSE and ARM platforms with NEON. We’re also interested in the possibility of supporting other platforms, potentially including MIPS, Power, and others.

SIMD.js is originally derived from the Dart SIMD specification, and it is rapidly evolving to become a more general API, and to cover additional use cases such as those that require narrower integer types, including Int8x16 and Int16x8, and saturating operations.

SIMD.js is a fairly low-level API, and it is expected that libraries will be written on top of it to expose higher-level functionality such as matrix operations, transcendental functions, and more.

In addition to being usable in regular JS, work is also underway to add SIMD.js to asm.js, so that it can be used from asm.js programs such as those produced by Emscripten. In Emscripten, SIMD can be achieved through the built-in autovectorization, the generic SIMD extensions, or the new (and still growing) Emscripten-specific API. Emscripten will also implement subsets of popular headers such as <xmmintrin.h> as wrappers around the SIMD.js APIs, as additional ways to ease porting SIMD code in some situations.

SIMD.js Today

The SIMD.js API itself is in active development. The ecmascript_simd GitHub repository currently serves as a provisional specification and provides a polyfill implementation offering the functionality, though of course not the accelerated performance, of the SIMD API on existing browsers. It also includes some benchmarks which double as examples of basic SIMD.js usage.

To see SIMD.js in action, check out the demo page accompanying the IDF2014 talk on SIMD.js.

The API has been presented to TC-39, which has approved it for stage 1 (Proposal). Work is proceeding in preparation for subsequent stages, which will involve proposing something closer to a finalized API.

The SIMD.js implementation in Firefox Nightly is in active development. Internet Explorer has listed SIMD.js as “under consideration”. There is also a prototype implementation in a branch of Chromium.

Short SIMD and Long SIMD

One of the uses of SIMD is to accelerate processing of large arrays of data. If you have an array of N elements, and you want to do roughly the same thing to every element in the array, you can divide N by whatever SIMD size the platform makes available and run that many instances of your SIMD subroutine. Since N can be very large, I call these kinds of problems long SIMD problems.
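The structure of such a long SIMD loop can be sketched as below. The body of the vector loop uses plain indexed math as a stand-in for SIMD.js load/multiply/store calls, so the snippet runs anywhere; only the loop shape (vector body plus scalar tail) is the point:

```javascript
// Scale every element of a large array, 4 lanes per iteration.
function scaleInPlace(data, factor) {
  const n = data.length;
  const vecEnd = n - (n % 4); // largest multiple of the 4-lane SIMD width
  for (let i = 0; i < vecEnd; i += 4) {
    // One "SIMD" iteration: 4 elements processed together.
    data[i] *= factor;
    data[i + 1] *= factor;
    data[i + 2] *= factor;
    data[i + 3] *= factor;
  }
  for (let i = vecEnd; i < n; i += 1) {
    data[i] *= factor; // scalar tail for the 0-3 leftover elements
  }
  return data;
}

console.log(scaleInPlace([1, 2, 3, 4, 5, 6], 2)); // [ 2, 4, 6, 8, 10, 12 ]
```

The scalar tail loop is what handles an N that is not an exact multiple of the SIMD width.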

Another use of SIMD is to accelerate processing of clusters of data. RGB or RGBA pixels, XYZW coordinates, or 4×4 matrices are all examples of such clusters, and I call problems which are expressed in these kinds of types short SIMD problems.

SIMD is a broad domain, and the boundary between short and long SIMD isn’t always clear, but at a high level, the two styles are quite different. Even the terminology used to describe them features a split: In the short SIMD world, the operation which copies a scalar value into every element of a vector value is called a “splat”, while in the long vector world the analogous operation is called a “broadcast”.
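Spelled out, a splat just replicates one scalar across every lane. In SIMD.js (per the polyfill) this is a single operation on the vector type; the plain-array version below only illustrates the semantics:

```javascript
// "Splat": copy one scalar into every lane of a 4-lane vector.
function splat(s) {
  return Float32Array.of(s, s, s, s);
}

console.log(Array.from(splat(2.5))); // [ 2.5, 2.5, 2.5, 2.5 ]
```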

SIMD.js is primarily a “short” style API, and is well suited for short SIMD problems. SIMD.js can also be used for long SIMD problems, and it will still deliver significant speedups over plain scalar code. However, its fixed-length types aren’t going to achieve maximum performance on some of today’s CPUs, so there is still room for another solution to be developed to take advantage of that available performance.

Portability and Performance

There is a natural tension in many parts of SIMD.js between the desire to have an API which runs consistently across all important platforms, and the desire to have the API run as fast as possible on each individual platform.

Fortunately, there is a core set of operations which behave very consistently across a wide variety of platforms. These operations include most of the basic arithmetic operations and form the core of SIMD.js. For this set, little to no overhead is incurred because many of the corresponding SIMD.js operations map directly to individual hardware instructions.

But there are also many operations that perform well on one platform and poorly on others. These can lead to surprising performance cliffs. The current approach of the SIMD.js API is to focus on the things that can be done well with as few performance cliffs as possible. It is also focused on providing portable behavior. In combination, the aim is to ensure that a program which runs well on one platform will also run, and run well, on another.

In future iterations of SIMD.js, we expect to expand the scope and include more capabilities as well as mechanisms for querying capabilities of the underlying platform. Similar to WebGL, this will allow programs to determine what capabilities are available to them so they can decide whether to fall back to more conservative code, or disable optional functionality.

The overall vision

SIMD.js will accelerate a wide range of demanding applications today, including games, video and audio manipulation, scientific simulations, and more, on the web. Applications will be able to use the SIMD.js API directly, libraries will be able to use SIMD.js to expose higher-level interfaces that applications can use, and Emscripten will compile C++ with popular SIMD idioms onto optimized SIMD.js code.

Looking forward, SIMD.js will continue to grow, to provide broader functionality. We hope to eventually accompany SIMD.js with a long-SIMD-style API as well, in which the two APIs can cooperate in a manner very similar to the way that OpenCL combines explicit vector types with the implicit long-vector parallelism of the underlying programming model.

Software CarpentryWhy We Don't Teach Testing (Even Though We'd Like To)

If you haven't been following Lorena Barba's course on numerical methods in Python, you should. It's a great example of how to use emerging tools to teach more effectively, and if we ever run Software Carpentry online again, we'll do it her way. Yesterday, though, when she posted this notebook, I tweeted, "Beautiful... but where are the unit tests?" In the wake of the discussion that followed, I'd like to explain why we no longer require people to teach testing as part of the Software Carpentry core, and then ask you all a favor.

To begin with, though, I should make three things clear. First, I believe very strongly that testing is a key software development practice—so much so that I'm very reluctant to use any library that doesn't come with a suite of tests. Second, I believe that scientific software is just as testable as any other kind of software, and that a lot of scientists test their software well. Third, I think it's great that several of our instructors do still teach testing, and I'd like to see it back in the core some day.

So why was testing taken off the list of topics that must be taught in order for a workshop to be called "Software Carpentry"? The answer is that our lessons weren't effective: while most learners adopted shell scripting, started writing functions, and put their work under version control after a workshop, very few started writing unit tests.

The problem isn't the concept of unit testing: we can explain that to novices in just a couple of minutes. The problem isn't a lack of accessible unit testing frameworks, either: we can teach people Nose just as soon as they've learned functions. The problem is what comes next. What specific tests do we actually teach them to write? Every answer we have (a) depends on social conventions that don't yet exist, and (b) isn't broadly appealing.

For example, suppose we wanted to test the Python 3 entry in the n-body benchmark game. The key function, advance, moves the system forward by a single time step. It would be pretty easy to construct a two-body system with a unit mass at the origin and another mass one AU away, figure out how far each should move in a single day, and check that the function got the right answer, but anything more complicated than that runs into numerical precision issues. At some point, we have to decide whether the actual answer is close enough to the expected answer to count as a pass. The question learners ask (quite reasonably) is, "How close is close enough?"
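
To make the question concrete, here is a sketch of what such a test might look like. The advance below is a deliberately simplified one-dimensional stand-in (unit masses, G = 1, a single Euler step), not the real benchmark-game function, and the tolerance of 1e-12 is itself an assumption, chosen because a single step in double precision should be essentially exact:

```python
import math

def advance(bodies, dt):
    """One Euler step for a 1-D N-body system with G = 1.
    Each body is a list [position, velocity, mass]."""
    for i, (xi, vi, mi) in enumerate(bodies):
        a = 0.0
        for j, (xj, vj, mj) in enumerate(bodies):
            if i != j:
                r = xj - xi
                a += mj * r / abs(r) ** 3   # pull toward body j
        bodies[i][1] = vi + a * dt          # update all velocities first...
    for b in bodies:
        b[0] += b[1] * dt                   # ...then all positions

# Two unit masses one unit apart: each feels an acceleration of
# magnitude 1, so one step of size dt changes each velocity by dt.
bodies = [[0.0, 0.0, 1.0], [1.0, 0.0, 1.0]]
advance(bodies, dt=0.01)

# "How close is close enough?" Here: a relative tolerance of 1e-12.
assert math.isclose(bodies[0][1], 0.01, rel_tol=1e-12)
assert math.isclose(bodies[1][1], -0.01, rel_tol=1e-12)
```

The hard part the post describes lives in the last two lines: the test forces you to commit to a tolerance, and nothing in the software itself tells you what that tolerance should be.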

My answer was, "I don't know—you're the scientist." Their response was, "Well, I don't know either—you're the computer scientist." Books like these aren't much help. Their advice boils down to, "Think carefully about your numerical methods," but that's like telling a graphic designer to think carefully about the user: a fair response is, "Thanks—now can you please tell me what to think?"

What I've realized from talking with people like Diane Kelly and Marian Petre is that scientific computing doesn't (yet) have the cultural norms for error bars that experimental sciences have. When I rolled balls down an inclined plane to measure the strength of the earth's gravity back in high school, my teacher thought I did (suspiciously) well to have a plus or minus of only 10%. A few years later, using more sophisticated gear and methods in a university engineering class, I wasn't done until my answers were within 1% of each other. The difference between the two was purely a matter of social expectations, and that's true across all science. (As the joke goes, particle physicists worry about significant digits in the mantissa, while astronomers worry about significant digits in the exponent, and economists are happy if they can get the sign right...)

The second problem is the breathtaking diversity of scientific code. Scientific research is highly specialized, which means that the tests scientists write are much less transferable or reusable than those found in banking, web development, and the like. The kinds of tests we would write for a clustering algorithm will be very different from those we'd write for a fluid dynamics simulation, which would in turn be different from those we would write for a program that flagged cancerous cells in microscope images or one that cleaned up economic data from the 1950s.

For example, Lorena Barba commented on an earlier version of this post by saying:

You reference in your post our lesson on the full (nonlinear) phugoid model...

If you notice there, and also in the earlier lesson on the simpler linear model, we introduce grid-convergence analysis—a methodical way of executing code verification in numerical computing. This is not common: hardly any beginner course in numerical methods will cover observed order of convergence in this way. I believe this is the right approach: we are emphasizing a technique that should be used in practice to show evidence that the code is computing a numerical solution that converges as expected with grid refinement.

That's another example of what makes Lorena's course great, but (a) the testing method isn't something that a microbiologist or economist would ever use, and (b) that notebook also includes this:

The order of convergence is p = 1.014

See how the observed order of convergence is close to 1? This means that the rate at which the grid differences decrease match the mesh-refinement ratio. We say that Euler's method is of first order, and this result is a consequence of that.

How far away from 1.0 would the order of convergence have to be in order for someone to suspect a bug in the code? 1.1? 1.5? 2.0? Or should 1.014 itself be regarded with suspicion? Any test, automated or otherwise, must answer that question, but those answers are going to vary from domain to domain as well.
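
The quoted check can at least be made mechanical. The sketch below computes an observed order of convergence for Euler's method on a toy problem (y' = -y over [0, 1], a stand-in chosen here for brevity, not the phugoid model), using solutions on three grids refined by a factor of 2:

```python
import math

def euler_final(h):
    """Integrate y' = -y with y(0) = 1 over [0, 1] using Euler's method."""
    y = 1.0
    for _ in range(round(1.0 / h)):
        y += h * (-y)
    return y

# Solutions on three grids, each refined by the ratio r = 2.
f_coarse, f_mid, f_fine = euler_final(0.1), euler_final(0.05), euler_final(0.025)

# Observed order of convergence from successive grid differences.
p = math.log(abs(f_coarse - f_mid) / abs(f_mid - f_fine)) / math.log(2)
print(p)   # close to 1, as expected for a first-order method
```

Even here the original question returns: an automated test still has to assert something like abs(p - 1) < 0.1, and the 0.1 is a judgment call that varies by method and domain.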

In theory, we can solve this by writing different lessons for different communities. In practice, that requires more resources than we have, and we'd still have to decide what to do in a room containing economists, microbiologists, and cosmologists.

I believe we can teach software testing to scientists, but I also believe that we have some work to do before we can do it effectively enough for most of our learners to put it back in Software Carpentry's core. What we can do to bring that day closer is start amassing examples of tests from different domains that include explanations of why: why these tests, and why these tolerances? You can see my attempt at something like this here, but that example deliberately doesn't use floating point so that the question of error bars didn't arise.

So here's my challenge. I'd like you to write some unit tests for the advance function in the n-body benchmark game and then share those tests with us. I don't care what language you use (source is available in several), or which unit testing framework you pick. What I want to know is:

  1. Why did you choose the tests you chose, i.e., what kinds of errors are those tests probing for?
  2. How did you pick your margin of error?

You can send us your tests any way you want, and I will happily send Software Carpentry t-shirts to the first half-dozen people to do so.

My thanks to Lorena Barba, Matt Davis, Justin Kitzes, Ariel Rokem, and Fernando Pérez for feedback on an earlier draft of this post.

Air MozillaProduct Coordination Meeting

Weekly coordination meeting for Firefox Desktop & Android product planning between Marketing/PR, Engineering, Release Scheduling, and Support.

about:communityGrow Mozilla discussion this Thursday

If you’re interested in helping new people get involved with Mozilla, join us Thursday for an open community building forum.

Air MozillaTechWomen Emerging Leader Presentations

As part of the TechWomen program, Emerging Leaders from Cameroon, Lebanon and Kenya will give short lightning talk presentations on their work and professional areas...

Software CarpentryPandoc and Building Pages

Long-time readers of this blog and our discussion list will know that I'm unhappy with the choices we have for formatting our lessons. Thanks to a tweet from Karl Broman, I may have an answer. It's outlined below, and I'd be grateful for comments on usability and feasibility.

Here's a summary of the forces we need to balance:

  1. People should be able to write lessons in Markdown. We choose Markdown rather than LaTeX or HTML because it's easier to read, diff, and merge; we choose it rather than AsciiDoc or reStructuredText (reST) because it's much better known.
  2. People should be able to preview their lessons locally before publishing them, both to avoid embarrassment and because many people compose offline.
  3. Lessons should be easy to write and read. We shouldn't require people to put divs and other bits of HTML in their Markdown.
  4. It should be easy to add machine-comprehensible structure to lessons. We want to be able to build tools to extract lesson titles, count challenge exercises, etc., all of which requires machine-comprehensible source. This is in tension with the point above: everything we do to make lessons more readable to computers means extra work or less readability for people.
  5. We should use only off-the-shelf tools. We don't want to have to build, document, and maintain custom plugins for formatting tools. We do want to use GitHub's gh-pages magic.
  6. The workflow for creating and publishing lessons should be authentic, i.e., the way people write and publish lessons should be a way they might use to write and publish research papers.
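
As a sketch of the kind of tooling point 4 anticipates, a script could pull the title and count the challenges from a lesson's Markdown source. The conventions assumed here (a single "# " heading for the title, challenges marked with "## Challenge" headings) are hypothetical, not an agreed format:

```python
import re

def lesson_stats(markdown):
    """Return (title, challenge_count) for one lesson's Markdown source."""
    title = None
    challenges = 0
    for line in markdown.splitlines():
        if title is None and line.startswith("# "):
            title = line[2:].strip()          # first level-1 heading
        if re.match(r"##\s+Challenge", line):
            challenges += 1                   # assumed challenge marker
    return title, challenges

src = "# Intro to the Shell\n\ntext\n\n## Challenge: pipes\n\n## Challenge: loops\n"
print(lesson_stats(src))  # ('Intro to the Shell', 2)
```

The point is only that whatever structure we agree on must be simple enough that a dozen lines of code can extract it reliably.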

And here's the proposal:

  1. We stop relying on Jekyll and start using Pandoc instead.
  2. Every lesson is stored in a GitHub repository that has a gh-pages branch. (GitHub will automatically publish the files in that branch as a mini-website.)
  3. The root directory of that repository contains:
    • a README.md file with a one-liner about the lesson's content and authorship;
    • a sub-directory called src that contains the source files for the lesson;
    • the compiled versions of those files; and
    • an empty file called .nojekyll to tell GitHub that we don't want it to run Jekyll.
  4. The src directory contains all the source files for the lesson, and a simple Makefile that uses Pandoc instead of Jekyll to compile those files. Pandoc's output goes in the root directory, i.e., one level above the src directory, and the Makefile makes sure that other files (CSS, images, etc.) are copied up as well.
  5. When an author makes a change, she must build locally, then commit those files to the GitHub repository. Yes, this means that generated files are stored in version control, which is normally regarded as a bad idea. But it does mean we can use Pandoc, which supports a nicer dialect of Markdown than Jekyll on GitHub, and we don't have to worry about compiling files on one branch and committing them to another.

I've created a proof-of-concept repository to show what this might look like in practice. It seems to work pretty well, and I think it satisfies the "authentic workflow" requirement (though I'd be grateful if others could tell me it doesn't). The only usability hiccup I can see is that authors will have to remember to commit the generated files: my usual workflow of git add -A followed by git commit -m only adds files in or below the current working directory, so I would have to cd .. from src to the root directory of my local copy of the repo first.

One variation on this raised by Trevor King is to keep the source files in the root directory of the master branch, and have the lesson maintainer merge changes into the src directory of the gh-pages branch and do the build. This frees authors from having to install the build tools—only the maintainers need that—but on balance, I think most people will want to preview before uploading, so the savings will be mostly theoretical.

If you have other thoughts, or can suggest other improvements, please add comments to this post. We'd particularly like to hear from people who aren't Git experts or aren't familiar with HTML templating systems, Makefiles, and the like. Does the workflow described above make sense? If not, what do you think would go wrong where, and why?

Air MozillaIntern Presentation: Alex Bardas

The presentation is about different bugs and features I've implemented related to e10s, awesomebar, tracking protection, search suggestions, newtab performance and hotfix add-ons.

Software CarpentryWhy Software Matters

Why does software matter to scientists? It may seem obvious to people who read this blog, but that's like saying that the answer to, "Why opera?" is obvious to the sort of person who pays a month's rent to get a decent seat at Covent Garden. Why does software matter? And why does it matter whether it's written well?

"It's the only way to tackle today's big questions" is a popular answer to the first question, while "We need to know if we can trust it" is a common response to the second, but I think both miss the point. Open source, open science, open access, open data... they're just enablers. The real prize is massive, open collaboration. It's meeting people you would otherwise never have met who can extend your work in ways you never thought of and then share their results so that others can take it even further.

But even that is just a means to an end. Software and science may be what we collaborate on, but they're also what brings us together, just as cleaning up a park can be what brings a neighborhood together. That's why I think groups like the Software Sustainability Institute matter so much. They are helping scientists (and through them, everyone else) build the technical and cultural muscle they will need when climate change, mass extinction, resource depletion, drug-resistant diseases, and all the other problems we're so resolutely not addressing right now can't be avoided any longer.

Air MozillaMentorship Program Relaunch

Learn how to use Mentorship and KitHerder to help grow and train your contributor base!

WebmakerMozFest 2014: We made this together!


We arrived as individuals, we left as a community.

More than 1,600 educators, community-builders, technologists and creators met in London from October 24-26, 2014 for MozFest, a collaborative festival dedicated to innovating for the open web.

Over the course of three days, attendees from 50+ countries participated in hundreds of hands-on sessions exploring topics ranging from opportunities for the mobile web and digital literacy to journalism, science, and arts, culture and music on the web.

Friendships were forged, ideas emerged, prototypes were hacked. Here’s a taste of what we worked on together at MozFest 2014:

Imagining Applications for BRCK


BRCK is a device that allows connectivity in places where it wouldn’t normally be—it’s durable; portable; can be powered with, for example, solar cells; and uses the 3G network. It inspired a number of ideas at MozFest including:

  • @SteveALee from GPII attended a BRCK session because he’s interested in using the technology to address accessibility issues, especially cognitive disabilities and digital literacy.
  • Mozilla Rep, Andre Garzia, saw opportunities to combine BRCK with MozStumbler, an app that lets people use their devices to contribute to a database of location data that’s used to enable context-aware applications.
  • The Mozilla Appmaker team thought it would be awesome to load Webmaker onto BRCK and take it to communities as a new product for increasing digital literacy and to enable people to create their own apps.

Mozfest’s First Youth Zone

EPIK is a digital literacy initiative designed for young people ages 8-25 that started in Kent, UK and is now taking its work global. The EPIK team set up and wrangled the first MozFest “Youth Zone” this year and it was a huge success:
  • All weekend the Youth Zone was one big Maker Party for kids. The best part? They used a peer-to-peer, kid-to-kid teaching model, so by the end of the weekend it was kids doing the mentoring for their new maker peers.
  • Young mentors and facilitators used Mozilla’s Webmaker training to learn about the web literacy map and Minecraft-based activities to engage new learners and makers at the festival.
  • EPIK used this year’s MozFest to develop a new model for teaching young people at events worldwide. In 2015, they have plans to take the “youth zone” concept to Turkey, Poland and the US.

Brainstorming How to Use SAM Wireless Sensors


SAM is a set of building blocks with sensors and motors that are connected wirelessly. The SAM app allows you to connect them in interesting ways, and helps you learn to code if you don’t already know how.  MozFest participants played with SAM while learning about its potential applications including:

  • Elderly care: using the accelerometer and gyroscope modules, a nurse will know if someone is turning in bed or roaming around the room.
  • Fun with proximity sensors: imagine pairs of shoes with spray paint cans on them. When two people wearing shoes get near each other, the shoes spray paint.
  • Smart fridges: put a pressure sensor under a milk bottle. When you’re at the store, it can remind you that you need milk.


Learning to Make Personal Apps with Appmaker


Appmaker is a free, open source tool developed by the Mozilla community for creating personal mobile apps, even if you don’t know how to code. Appmaker allows you to combine individual bricks to create and share custom mobile apps right in your web browser. During MozFest:

  • Youth participant, Dhiresh Nathwani, created his first game using Appmaker’s “Chef Adventure Game” brick. Check it out here!
  • Laura de Reynal and Jan Chipchase led a group of participants in planning a Community Challenge for people who want to conduct local research to inform the development of the Appmaker project.


Rethinking the Browser

Matthew Willse, a developer & designer with Mozilla’s Webmaker project, facilitated a MozFest session called Rethink the Browser, imagining new ways to peek beyond the surface of a web site to enable smarter, safer browsing on the mobile web.
Session participants generated lots of ideas, in part focusing on data usage — a costly and limited resource for many people around the world. The problem: we don’t know how much data we have used until after we use it. Session participants designed an intervention that gives users more information before they visit a site, and agency to preserve their bandwidth. Using bookmarks and top sites listed in Firefox Mobile, they built a system to indicate whether each site is more or less resource intensive (red to green).
If you’re interested in getting involved, the idea needs a bit more work to improve the indicators and interaction, and to map the relative scale of resources and bandwidth. Contact Matthew on Twitter @mw to help. More info in Matthew’s blog.


Encouraging Future Digital Leaders at the Maker Party

Saturday’s Maker Party was a high-energy event with dozens of hands-on opportunities to learn new digital skills. It was open to makers and learners of all ages, but one group stood out: the Digital Leaders program from the Staplehurst School in the UK.
  • The Digital Leaders program is an opportunity for 9-12 year old kids to teach their fellow students, other adults and even their teachers about key digital skills.
  • These amazing students recently led an open source activity for their teachers, and the weekend before MozFest held one of the UK’s biggest Maker Parties with over 300 participants. At MozFest they taught hundreds of attendees about stop motion animation by making short movies right there on the spot.
  • Inspired by their time at MozFest, they are making digital literacy a key part of their educational program and are looking to reach more kids by leading more Maker Parties year round.


Creating Learning Guides for Community Makers


Erhardt Graeff and Nathan Matias from the MIT Center for Civic Media co-facilitated this session with the goal of creating a learning guide for other community-focused makers.

The group explored the question “What do we mean by civic and community-focused making?” and participants shared challenges based on their own experiences.

Several participants committed to refining a template meant to emphasize the community-focused goals and specific context of any initiative or workshop, evaluating the learning guides, and creating ways that organizers who try the guides can share back what they learn. To get involved, contact @erhardt or @natematias. Read more in Erhardt’s blog.


Empowering Local Communities with Snapp

Snapp is a simple and intuitive tool for creating mobile apps, without needing any technical knowledge. Snapp has already been used in Senegal to educate people about the Ebola virus. More than 12,000 people accessed life-saving information by downloading the app in one of 18 local dialects. Co-founder Gabriel Gurovich said he hopes Snapp can empower local communities to create solutions to their particular problems. MozFest participants got an early invite to Snapp and brainstormed how they might use the tool to solve problems in their own communities.


Exploring New Hives

Hive Learning Networks are made up of organizations and individuals who create learning opportunities for young people within and beyond the confines of the traditional classroom experience. Many of the participants in the Hive sessions were there to explore how they can bring the Hive model to their communities. Sanda Htyte is a Hive and MozFest veteran who was there to explore how she could start a Hive learning network in her new home of Seattle:
  • Sanda has worked with Hive NYC since 2008 as part of the Radio Rookies program at WNYC, teaching kids media literacy skills.
  • She recently relocated to Seattle, where she quickly recognized a gap in digital literacy opportunities for kids. She’s working with groups like the Pacific Science Center and the central branch of the Seattle Public Library to add new programming aimed at youth. This summer she worked with Geek Girls Carrots to hold Seattle’s first Maker Party.
  • At MozFest, Sanda worked with others from the Hive Global network to find out what works and what doesn’t in building a new Hive, connected with other successful Hives and organizations, and is headed back to Seattle with all the knowledge and tools needed to start developing a new Hive in her community.

Read about even more projects from MozFest 2014.


WebmakerMozFest 2014: Photos from the Demo Party


Software CarpentryLost in Space

You probably haven't seen the 1998 movie Lost in Space, or if you have, you've suppressed the memory—it was awful. But I do know one guy who enjoyed it. His name was Joe, and he had worked on the software used to create its special effects. Ten minutes into the film he took out his Walkman (a primitive form of iPod), put on his headphones, and spent the next two hours head-bobbing to a mix of Bob Marley and Smashing Pumpkins.

Joe enjoyed the movie because he was looking for something different in it than the rest of us. Luckily for him, the story and dialog were irrelevant; what he cared about was edge glitches in spheroid lighting effects, which made the after-movie discussion with the rest of us rather surreal.

Something similar happens pretty regularly on our 'discuss' list. This week, for example, Ben Marwick's message about Docker containers generated a thread with a couple of dozen replies (and counting). Some of the discussion is about the mechanics of deploying Docker, resizing windows, and so on; other parts are about what happens in the classroom when you do this and what novices find easy or hard.

It feels in places like two conversations have been interleaved—like some people are looking at the effects and others are thinking about character development. Given our mission, we need to have both, but two things make this complicated.

The first issue is the disparity in the social license to comment between education and technology. People generally accept that non-experts are entitled to have and express opinions on some topics, such as foreign policy, how usable different programming languages are, and what the local sports team should have done in last night's game. In other areas, though, such as virtualization or heart surgery, there's an implicit social agreement that only the cognoscenti should speak out.

Teaching pretty clearly falls into the first category: people who have never encountered the literature on educational psychology still feel they have something to offer based on personal experience. And they're right: when analyzed systematically, their (qualitative) anecdotes can provide insights every bit as much as statistics. The key word in that sentence is "systematically", though, and this is where I most keenly feel the gap between what Software Carpentry is and what it could be.

Second, most of us (myself included) know more about technology than education, and therefore tend to gravitate toward the former. But as one person on the list said to me privately over the summer, that means that many of our discussions of teaching wind up going down a technical rabbit hole, leaving the less technically inclined behind.

The Japanese math teachers that Green described in Building a Better Teacher have only half the problem we do: they "only" need to figure out how to teach math, not how it works. The trick for us is to find a way to mix discussion of both sides of that coin in a way that keeps everyone engaged.

After the movie was (finally, thankfully) over, we all went across the street for pizza. The conversation that followed moved back and forth between "how do you think they did that smoke effect?" and "wow, even when Gary Oldman phones in a performance, he's pretty creepy" because we were graphics programmers as well as movie fans and didn't see any reason not to think both ways at once. I think that's a pretty good model for Software Carpentry; I think that asking "what's the tech?" when someone says "here's the teaching" and "what's the teaching?" when someone says "here's the tech" helps us all learn more about both.

Coincidentally, as I was wrapping up this article I received a link to a paper by Zagalsky et al called "The Emergence of GitHub as a Collaborative Platform for Education" that quotes some earlier posts from this blog. The discussion of what people are doing, both technically and pedagogically, is another good example of the kind of interleaved discussion that we have been having about Docker.

Software CarpentryBritish Library Courses

I had a chance to catch up with James Baker at the British Library on Friday, and discovered that they're running an amazing series of short classes on digital skills for librarians. With his permission, I've posted their outline below, along with a few excerpts from their FAQ. Some of it is site-specific, but I think a lot would be relevant elsewhere. (Programming in Libraries was developed alongside writing these lessons for the Programming Historian and tweaked for the library audience; interested readers should also check out the practical elements of Managing Personal Digital Research Information, which work through this public wiki.) If you'd like more information, please mail digitalresearch@bl.uk.

The Digital Scholarship Training Programme is a Digital Research team initiative to provide staff with the opportunity and space to delve into the key concepts, methods and tools that define today's digital scholarship practice.

Programming in Libraries
This course provides an introduction to querying, transforming and mining research data using the command line.
Digital Storytelling
This course explores ways in which a variety of digital technologies can be combined to tell stories about our collections that can uncover novel perspectives and engage new audiences.
This is Digital Scholarship
This course takes a thought-provoking look at how information technology has transformed research today, familiarizing you with the concepts, methods and tools that define digital scholarship.
Metadata for Electronic Resources
This course surveys current standards for describing and encoding digital artefacts. It covers digital formats for describing the contents and contexts of artefacts for use in libraries, archives, and online repositories including Dublin Core and OAI as well as their expression in different mark-up languages such as HTML, XML and RDF.
Managing Personal Digital Research Information
This course takes a look at that research management software and how it developed from supporting bibliographic uses to become repositories for research notes, digital objects, and collaborative projects that can be searched, tagged, interrogated, networked, and shared.
Communicating our Collections Online
This course explores opportunities for sharing our collection images online and on external platforms such as Europeana, Wikimedia Commons and Flickr in the context of the Library's Access and Reuse Policy.
Digitisation at the British Library
This course covers lifecycle digitisation project planning and the process for embarking on digitisation at the British Library including considerations of copyright, metadata, preservation and access issues.
Geo-referencing and Digital Mapping
Through guided tasks and short talks, this course explores how geo-referencing and geo-tagging can be used to display content in innovative, research orientated, and user-friendly ways. It shows how digital mapping techniques produce maps that can be queried, layered, and presented in novel and unexpected ways.
Information Integration: Mash-ups, API's and Linked Data
This course introduces the fundamentals of information integration and sharing, from web mash-ups and APIs to semantic web/linked open data technology, and how these technologies are being used to communicate and connect collections online.
Cleaning up Data
This workshop aims to help you uncover hidden datasets and to gain the skills to clean and arrange those datasets in ways that make them more accessible for further analysis. The day primarily consists of a hands-on guided introduction to getting started with OpenRefine.
Digital Collections at the British Library
This course provides an overview of the present and future landscape of our digitised collections with a focus on how they are acquired, ingested, preserved, and made available.
Crowdsourcing in Libraries, Museums and Cultural Heritage Institutions
Libraries, archives and museums have a long history of participation and engagement with members of the public. This course provides an understanding of what crowdsourcing is and the different types of crowdsourcing activities that are used in a cultural heritage context.
Data Visualisation for Scholarly Analysis
This workshop provides an overview of a variety of techniques and tools available for data visualization and analysis across the humanities and sciences.

FAQ:

What is the Digital Scholarship Training Programme?
A programme of one-day courses, launched in Autumn 2012, giving staff across the Library the opportunity and space to delve into the key concepts, methods and tools that define today's digital scholarship practice. Based on consultation with internal stakeholders as well as colleagues in the HE, Cultural Heritage and IT sectors, the Digital Research Team have designed this unique programme to address the training needs of staff at the British Library. A team of first-class instructors from institutions such as the Open University, University of Sheffield, City University London, and the British Library has been assembled to deliver this learning programme.
Who are the courses aimed at?
Courses are designed to be introductory and are aimed at 'Intelligent Novices', that is, colleagues who have heard about the concepts but haven't had the time, space or opportunity to really explore them in depth. It is very important to us that they be inclusive and accessible, challenging but not terrifying.
What will my day look like?
Most courses, unless otherwise stated, are designed to be a full-day experience comprising both a lecture and practical hands-on work, in order to give staff the best opportunity to fully engage with the topic. Colleagues will be provided with two short tea breaks and an hour break for lunch. Unfortunately catering cannot be provided.
Will there be tests?
No! But we do want to ensure that you get the most out of the day, so there will typically be hands-on group activities that give you the opportunity to apply your new knowledge. We will also ask you to complete an anonymous self-assessment feedback form, which lets us know how much you feel your knowledge of the topic has progressed from the beginning to the end of the course. This helps us to identify skills gaps that might need filling in the next semester's offering.

WebmakerMozFest 2014: Closing Keynotes


MozFest 2014 Closing Keynotes

Speakers at Sunday morning’s closing keynote session:

    • Filmmaker and education activist Baroness Beeban Kidron
    • Mozilla Chairwoman Mitchell Baker
    • Dan Sinker and Erika Owens from Knight-Mozilla OpenNews
    • Dave Steer from Mozilla Advocacy

 

Get Involved:

Air MozillaMozilla Festival 2014 - Closing Comments

Mozilla Festival 2014 - Closing Comments MozFest 2014 Session Close from Ravensbourne College in London. Closing comments by Allen Gunn, Dia Bondi and Mozilla Chairwoman Mitchell Baker. MozFest is Mozilla's annual...

WebmakerMozFest 2014: Photos from Sunday Keynotes & Sessions

[22 photos from Sunday’s keynotes and sessions]

Get Involved:

WebmakerMozFest 2014: Community Lightning Talks


MozFest 2014 Community Lightning Talks

Speakers:

  • Chris Locke
  • Jan Chipchase
  • Laura de Reynal
  • Vineel Pindi
  • Aubrey Anderson
  • Jon Rodgers
  • Avinash Kumar
  • Emma Irwin
  • Leah Gilliam

Get Involved:

Air MozillaMozilla Festival 2014 - Closing Keynotes

Mozilla Festival 2014 - Closing Keynotes MozFest 2014 Sunday Keynotes from Ravensbourne College in London. Keynote Speakers: Filmmaker and education activist Baroness Beeban Kidron; Mozilla Chairwoman Mitchell Baker MozFest is Mozilla's...

WebmakerMozFest 2014: Photos from the Maker Party


Photos from Maker Party!

Energy was high at MozFest when more than 300 children & youth filled the ground floor of Ravensbourne for a giant Maker Party. Robots, Playdough, Raspberry Pis… oh my!

[8 photos from the Maker Party]

Photography by Paul Clarke and Tracy Howl.

Get Involved:

Air MozillaMozilla Festival 2014 - Community Lightning Talks

Mozilla Festival 2014 - Community Lightning Talks MozFest 2014 Community Lightning Talks from Ravensbourne College in London.

WebmakerMozFest 2014: Photos from Saturday’s keynotes & sessions


MozFest 2014: Photos from Saturday’s keynotes, sessions & workshops

[23 photos from Saturday’s keynotes, sessions and workshops]

Get Involved:

WebmakerMozFest 2014: Science Fair Photos


Science Fair evening reception

The MozFest 2014 Science Fair was an energy-filled collection of innovative prototypes, ideas, hacks and collaborations. See all the photos here. A huge thank you to Paul Clarke and Tracy Howl for helping us with photography!

[9 photos from the Science Fair reception]

Get Involved:

Air MozillaMozilla Festival 2014 - Opening Keynotes

Mozilla Festival 2014 - Opening Keynotes MozFest 2014 Opening Keynotes from Ravensbourne College in London.

SUMO BlogWhat’s up with SUMO – 24 October

…a bit delayed, but still on time! Here’s the latest and greatest from SUMO, for your reading and watching pleasure.

New arrivals to SUMO – welcome!

Latest SUMO Community meeting video

Our latest meeting was an hour-long training/Q&A session about Firefox Hello. Watch the video to learn more:

If you want to comment on the video or ask questions regarding the training, please do so in this forum thread. You can find the notes and questions asked and answered here.

Also, please remember that you’re always invited to contribute topics to our Monday meetings! To do so, please start a thread in the SUMO Community Discussions section of the forum, so all members of the community are able to learn about it and participate.

Forum 2.0: Keep the Feedback Coming!

We are still looking for your feedback on the forum redesign. Thanks to all those who already took a look at the new filters and left their comments in the forum thread. There’s a new version available as a preview here and you can let the SUMO devs know what you think on this bug.

Thunderbird Summit Update

Roland attended the Thunderbird Summit and brought back a slew of updates. The most important outcome of the summit is the election of the Thunderbird Council, consisting of seven members. You can read more about the Summit on this wiki page.

Shout-out time: Locasprinters in Paris get the FxOS KB to 100%!

Last weekend, the dedicated and determined localizers in France met in Paris for a Locasprint (it means “let’s localize everything we can!” in French, really!). They worked hard on a variety of things, including the Firefox OS KB, with amazing results. You can read more about the Locasprint (part deux) in this blog post, and I’ll just drop this screenshot here. Congrats and merci beaucoup!


In the meantime, the Mozilla Bangladesh community is not letting go and pushing for a 100% Bengali KB. Go, Bangladesh L10ns!

Regular Mobile Meetings

Starting this month, the mobile meeting is taking place on the last Wednesday of each month, at 10:00 PST (18:00 GMT). For the weekly Firefox OS and Firefox for Android updates, keep following the Community Meetings on Mondays.

On a final note – don’t forget that MozFest 2014 is taking place this weekend in London! Not in London? Not a problem – you can still participate remotely. SUMO will be delivering a session on localizing Webmaker support tomorrow. Here’s a snapshot from the place itself:

[photo: blimp_mozfest]

More news coming your way next week – have a great weekend!

WebmakerMozFest 2014: The Science Fair


MozFest kicks off with a bang

The traditional kickoff to #MozFest, Mozilla’s annual festival celebrating the open web, is the Science Fair evening reception. This year, more than 900 attendees participated, making it our largest Science Fair ever!


Photo @sandamoon

Ravensbourne was buzzing as participants demoed their projects, prototypes, ideas and hacks. People from around the globe mingled with drinks in hand, discussing innovative ideas for making music on the web, learning and teaching code, controlling how our data is shared on the web and more.

MozFest Science Fair projects included:

BRCK, a self-powered, mobile WiFi device designed in Nairobi, Kenya and intended for use in rural communities. Built by the creators of Ushahidi, Crowdmap and the iHub, BRCK is physically robust, can connect to multiple networks, acts as a hub for all local devices and contains enough backup power to survive a blackout.


 

Serendipidoodle, an app that generates two random words to be drawn. Participants are invited to consider the words, decide if they want to represent them literally or metaphorically, and interpret the words in a doodle. The drawings are then shared on Twitter. For lovers of wordplay and sketching, this is an experience where chance meets art.

 

Erase All Kittens, an innovative online platform game that teaches young people to code and create on the web by encouraging them to hack levels written in HTML and CSS in order to complete the game.


 

Terms of Service, by Michael Keller and Josh Neufeld, is a graphic novel explaining our role in the world of Big Data. Produced with AlJazeera America, it’s a gorgeous antidote to the normally dense, jargon-laden coverage of this issue.

Photo @HasitShah

 

#YourFry is a wonderful example of the Internet of Things in action. The publishers of Stephen Fry’s latest book invited the public to create digital hacks of the project. Michael Shorter used conductive ink, allowing you to touch different areas on the book’s cover to play different stories from the book in audio format.

Photo @kelli_jo_

 

After a night of socializing, MozFest participants will get their hands dirty tomorrow, meeting in hundreds of interactive sessions, finding kindred spirits to collaborate with, and starting the hands-on process of hacking solutions to some of the most pressing issues on the web.

Get Involved:

WebmakerMozFest 2014: Hive Chicago in the house


Hive Chicago at MozFest

This year, Hive Chicago is fortunate to bring an impressive cohort of your network colleagues and peers to MozFest, nominated and voted on by you to represent us. There are three categories: Moonshot Representatives will be representing the work of our Moonshot working groups over the last few months to identify solutions to the most persistent challenges to enacting Hive Chicago’s goals; Maker Party People have created highly engaging, hands-on, connected learning experiences with digital media and will be showcasing these at a Maker Party; and other travelers who will be leading or attending sessions at MozFest. See more.

MozFest: Arrive with an Idea, Leave with a Community

This “unconference” – hosted by the Mozilla Foundation – is part hackathon, part science fair, part Maker Party, and sometimes just full-on dance party. The festival brings together technologists, educators, journalists, bloggers, developers, and learners to share and revel in their vision of an open web, nay, an open society, and a truly connected world of learning.

Each year, session proposals are accepted around thematic tracks that loosely organize and bind some structured activities in a sea of self-guided and self-directed experiences. This year’s tracks include: Hive Learning Networks, Build and Teach the Web, Open Web with Things, The Mobile Web, Source Code for Journalism, Science and the Web, Art and Culture of the Web, Open Badges Lab, Musicians and Music Creators on the Open Web, Policy and Advocacy, and Community Building.

The Hive Learning Networks Track

The Hive track is exactly what you might expect: a community gathering of Hive members and the Hive-curious, people united to create a transcendent learning ecosystem in their respective cities or nations by promoting web literacy, digital media production and creating an openly networked environment of service providers and learning spaces. These are solutionaries, ready to tackle the unique challenges – both obstacles and opportunities – in their local neighborhoods. They are ready to bring innovative programs, projects, platforms, and products to their communities and bring us all one step closer to our goals.

The track consists of meetups, fireside chats, a number of sessions organized as an “Action Incubator”, a Maker Party, and opportunities to showcase our work. We open the festival with a global meetup on Friday night, which convenes a conversation around the Hive goals and the principles of connected learning. Then we dive right in.

Action Incubator

Our Friday evening meetup is immediately followed by the first session in our Action Incubator: Identifying Challenges – participants will identify obstacles that stand in their way of enacting our shared goals and a connected learning ecosystem in their local community, or identify the opportunities that might get them there. Then we will sort these challenges by affinity and reveal shared challenge areas that we all face in this work. Prepared with these shared goals, learning principles and challenge areas in mind, we will go forth and engage with all that MozFest has to offer with a collaborative, unified clarity of purpose.

Sound familiar? This process is modeled after our work in establishing Hive Chicago Moonshots, complemented by the creative insights and work of our sister Hive Networks around the globe. This is where our Moonshot Representatives get to provide guidance and support to the community. As experts in identifying shared challenges and organizing a community to action, the Moonshot Reps will play an important part in connecting and supporting attendees to the Hive Track at MozFest. Their existing Moonshots may also serve to help organize our global peers’ obstacles and opportunities into similar challenge areas.

Science Fair!

Before diving into the busy weekend, MozFest takes a moment to reflect and share in a science fair! Participants of every track at MozFest demo work that they bring with them, or ideas they hope to explore, in a cocktail-party-styled showcase that exposes attendees to the incredible diversity of creative endeavors they can choose to engage with during the rest of the weekend. This is a place to spark action.

We’re excited that Miriam Martinez will be sharing out SCCoogle and Expunge.io, two student-created web app projects aimed at addressing youth needs around school discipline and criminal records. David Bild and Emmanuel Pratt will also be sharing their Mapping Collaborative project as a teaser to their session.

Activating, Brainstorming, Prototyping, Iterating and Getting Feedback

On Saturday morning, we host session 2 of the Action Incubator: Brainstorming Solutions, where participants who need to spend a little more time with the Hive before venturing out will rapid-fire identify prototype solutions to their challenges or opportunities in a fun peer-to-peer exchange. On Sunday morning, we invite everyone, members of a Hive or otherwise, to session 3 of the Action Incubator: Receiving Feedback, where participants will share the insights and prototypes from their MozFest experience for some peer-to-peer feedback to further develop their ideas.

For our Moonshot Representatives, these two days are a unique opportunity to share the Hive Chicago Moonshots with a diverse, international crowd to see how and if our messages resonate. This is an important test of our work as we head into Hive is Five: can we activate interest and support from a broad community? Taking these Moonshots to sessions around MozFest will also be an opportunity to brainstorm solutions and start to develop prototypes that might be executed over the course of the next few months or years after MozFest. There is a depth of potential resources at MozFest that we should leverage.

A Maker Party Too

Saving the best for last… did I mention the MozFest Maker Party? Oh yeah, we’ve got one of those on Saturday too! The party is hosted by Hive Global, for the young people of London. This is where our Maker Party people come in and bring a gust of our Windy City energy across the pond. I hope these Londoners brought their galoshes, ’cause they’re about to get showered in learning.

Celebrating Our Accomplishments

The weekend comes to a close on Sunday evening with a Global Meetup Redux and a Final Showcase to reconvene our new and not-so-new community to share what we’ve accomplished, learned or created over the last 3 days and identify what’s ready to share with the world, or at least our friends at the festival. The Final Showcase on Sunday evening is less of a science fair and more of a circus!

Phew, that’s a busy 48-ish hours.

Get Involved:

WebmakerMozFest 2014: Need help? Find a mentor


New to MozFest? Mentors can help.

Hours before the official kickoff of MozFest 2014, a group of dedicated Webmaker Mentors and Supermentors gathered with a single mission: help new community members arriving at the festival.


Friendly mentors at MozFest 2014

MozFest is a whirlwind of making and learning, but it can be overwhelming if you don’t know where to begin. Mentors are here to help during the festival by:

  • Using a buddy system to pair participants with mentors so they’re able to get the most out of MozFest.
  • Explaining what’s happening with Webmaker in 2015 and providing opportunities for community members to get involved.
  • Helping participants feel valued and connected.

To connect with a mentor, simply tweet #teachtheweb or visit the Build & Teach The Web track on the 6th floor of Ravensbourne.  Learn more about mentors at webmaker.org/mentor.

Get Involved:

 

 

 

The Mozilla BlogMozFest 2014 begins today

Today marks the beginning of the fifth annual Mozilla Festival, one of the world’s biggest celebrations of the open web. More than 1,600 participants from countries around the globe will gather at Ravensbourne in East London for a weekend of…

WebmakerMozFest 2014 begins today


Welcome to MozFest!

Today marks the beginning of the fifth annual Mozilla Festival, one of the world’s biggest celebrations of the open web.

More than 1,600 participants from countries around the globe will gather at Ravensbourne in East London for a weekend of collaborating, building prototypes, designing innovative web literacy curricula and discussing how the ethos of the open web can contribute to the fields of science, journalism, advocacy and more.


 

Envisioning the future of the open web

In the next decade, billions more people will be coming online for the first time, largely thanks to the increased accessibility and affordability of mobile devices. There is a growing concern that the web of the future will have little to offer us except closed social networks and media consumption using apps, services and platforms created by a few big players. Additionally, troubling questions are emerging about how our online activity is monitored by governments and corporations. In the face of these threats, it’s crucial that we maintain our freedom, independence and agency as creators of the web, not just consumers.


Ambitious goals for MozFest 2014

MozFest brings together a passionate, global cohort to establish the open values that will govern the web of the future. Our aim this year is to develop tools and practices to keep the democratic principles of the Internet alive. We’ll be strategizing how to use both distributed organizing and skill-sharing to engage the global open web community. Web literacy – the critical skills necessary to read, write and participate on the Internet – is central to this mission. We’ll address the challenges facing the Internet and explore how to spread web literacy on a global scale through hands-on, interactive sessions organized into 11 themed tracks.


Inspiring keynote speakers

While the motto of MozFest is Less Yack, More Hack, participants will be treated to some engaging keynote speakers including Baroness Beeban Kidron, Mary Moloney from CoderDojo, Mark Surman, Executive Director of the Mozilla Foundation, and Mitchell Baker, Executive Chairwoman of Mozilla.

Dive in

MozFest is our biggest party of the year. If you’re celebrating with us in London, we invite you to dive in, meet some kindred spirits and start hacking. If you’re interested in joining the festivities from afar, check out these great options for remote participation.

Get Involved:

 

 

Air MozillaVirtual Reality & The Web: Next Steps

Virtual Reality & The Web: Next Steps It is the early days of the VR web. We flipped the switch in June with experimental VR-enabled builds of Firefox, and we've been busy...

Air MozillaParis Meetup pour la décentralisation d'Internet #3

Paris Meetup pour la décentralisation d'Internet #3 Third event of a series on how to decentralize the Web; this time Framasoft and Caliopen lead the dance and tell us more about their...

hacks.mozilla.orgSVG & colors in OpenType fonts

Sample of a colorfont

Prologue

Until recently, having more than one color in a glyph of a vector font was technically not possible. Getting a polychrome letter required multiplying the content for every color. As with many other techniques before, it took some time for digital type to overcome the constraints of the old technique. When printing with wood or lead type the limitation to one color per glyph is inherent (if you don’t count random gradients). More than one color per letter required separate fonts for the differently colored parts and a new print run for every color. This has been done beautifully, and pictures of some magnificent examples are available online. Using overprinting, the impression of three colors can be achieved with just two colors.

Overprinting colors
Simulation of two overprinting colors resulting in a third.

Digital font formats kept the limitation to one ‘surface’ per glyph. There can be several outlines in a glyph, but when the font is used to set type the assigned color applies to all outlines. Analogous to letterpress, the content needs to be doubled and superimposed to have more than one color per glyph. Multiplying does not sound like an elegant solution and it is a constant source of errors.

It took some emojis until the demand for multi-colored fonts was big enough to develop additional tables to store this information within OpenType fonts. As of this writing there are several different ways to implement this. Adam Twardoch compares all proposed solutions in great detail on the FontLab blog.

To me the Adobe/Mozilla way looks the most intriguing.

Upon its proposal it was discussed by a W3C community group and published as a stable document. The basic idea is to store the colored glyphs as svgs in the OpenType font. Of course this depends on the complexity of your typeface but svgs should usually result in a smaller file size than pngs. With the development of high resolution screens vectors also seem to be a better solution than pixels. The possibility to animate the svgs is an interesting addition and will surely be used in interesting (and very annoying) ways. BLING BLING.

Technique

I am not a font technician or a web developer, just someone very curious about these new developments. There might be other ways, but this is how I managed to build colorful OpenType fonts.

In order to make your own you will need a font editor. There are several options like RoboFont and Glyphs (both Mac only), FontLab and the free FontForge. RoboFont is the editor of my choice, since it is highly customizable and you can build your own extensions with Python. In a new font I added as many new layers as the number of colors I wanted to have in the final font. Either draw in the separate layers right away or just copy the outlines into the respective layer after you’ve drawn them in the foreground layer. With the very handy Layer Preview extension you can preview all layers overlapping. You can also just increase the size of the thumbnails in the font window; at some point they will show all layers. Adjust the colors to your liking in the Inspector, since they are used for the preview.

RoboFont Inspector
Define the colors you want to see in the Layer Preview
A separated letter
Layer preview
The outlines of the separate layers and their combination

When you are done drawing your outlines you will need to save a ufo for every layer/color. I used a little Python script to save them in the same place as the main file:

f = CurrentFont()
path = f.path

for layer in f.layerOrder:
    newFont = RFont()

    for g in f:
        orig = g.getLayer(layer)
        newFont.newGlyph(g.name)
        newFont[g.name].appendGlyph(orig)
        newFont[g.name].width = orig.width
        newFont[g.name].update()

    newFont.info.familyName = f.info.familyName
    newFont.info.styleName = layer
    newFont.save(destDir=path[:-4] + "_%s" % layer + ".ufo")
    newFont.close()

print "Done Splitting"

Once I had all my separate ufos I loaded them into TransType from FontLab. Just drop your ufos in the main window and select the ones you want to combine. In the Effect menu click ‘Overlay Fonts …’. You get a preview window where you can assign a rgba value for each ufo and then hit OK. Select the newly added font in the collection and export it as OpenType (ttf). You will get a folder with all colorfont versions.

TransType
The preview of your colorfont in TransType.

RoboChrome

In case you don’t want to use TransType you might have a look at the very powerful RoboFont extension by Jens Kutílek called RoboChrome. You will need a separate version of your base-glyph for every color, which can also be done with a script if you have all of your outlines in layers.

f = CurrentFont()
selection = f.selection

for l, layer in enumerate(f.layerOrder):
    for g in selection:
        name = g + ".layer%d" % l
        f.newGlyph(name)
        f[name].width = f[g].width
        l_glyph = f[g].getLayer(layer)
        f[name].appendGlyph(l_glyph)
        f[name].mark = (.2, .2, .2, .2)

print "Done with the Division"

For RoboChrome you will need to split your glyph into several.

Fonttools

You can also modify the svg table of a compiled font, or insert your own if it does not have one yet. To do so I used the very helpful fonttools by Just van Rossum. Just generate an otf or ttf with the font editor of your choice. Open the Terminal and type ttx if you are on Mac OS and have fonttools installed. Drop the font file in the Terminal window and hit return. Fonttools will convert your font into an XML file (YourFontName.ttx) in the same folder. This file can then be opened, modified and recompiled into an otf or ttf.

This can be quite helpful to streamline the svg compiled by a program and thereby reduce the file size. I rewrote the svg of a 1.6 MB font to get it down to 980 KB. For a webfont that makes quite a difference. If you want to add your own svg table to a font that does not have one yet, you might read a bit about the required header information. The endGlyphID and startGlyphID for the glyph you want to supply with svg data can be found in the <GlyphOrder> table.
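As a rough illustration of that kind of streamlining, a tiny script can collapse the whitespace between tags of a dumped svg table before recompiling the .ttx (a sketch only; `minify_svg` is an invented helper, not part of fonttools, and real savings also come from shortening path data by hand):

```python
import re

def minify_svg(markup):
    # Naive minifier: strip the outer whitespace and collapse any
    # whitespace sitting between tags. Whitespace inside attribute
    # values or text content is left untouched.
    return re.sub(r">\s+<", "><", markup.strip())

# Example: an editor-exported glyph with pretty-printed indentation
compact = minify_svg('<g>\n  <path d="M0 0"/>\n</g>')
# compact == '<g><path d="M0 0"/></g>'
```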

<SVG>
<svgDoc endGlyphID="18" startGlyphID="18">
    <![CDATA[
    <!-- here goes your svg -->
    ]]>
</svgDoc>
<svgDoc endGlyphID="19" startGlyphID="19">...</svgDoc>
<svgDoc endGlyphID="20" startGlyphID="20">...</svgDoc>
...
<colorPalettes></colorPalettes>
</SVG>
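If you are inserting many glyphs, generating those per-glyph entries with a small script is less error-prone than typing them out (`svg_doc_entry` is a hypothetical helper for illustration; the glyph IDs come from the <GlyphOrder> table of your dump):

```python
def svg_doc_entry(glyph_id, svg_markup):
    # Build one svgDoc element for the ttx dump. startGlyphID and
    # endGlyphID are equal when the entry covers a single glyph.
    return ('<svgDoc endGlyphID="%d" startGlyphID="%d">\n'
            '    <![CDATA[\n'
            '    %s\n'
            '    ]]>\n'
            '</svgDoc>') % (glyph_id, glyph_id, svg_markup)

entry = svg_doc_entry(18, '<!-- here goes your svg -->')
```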

One thing to keep in mind is the two different coordinate systems. Contrary to a digital font, svg has a y-down axis. So you either have to draw in the negative space, or draw reversed and then mirror everything with:

transform="scale(1,-1)"
Y-axis comparison
While typefaces usually have a y-up axis SVG uses y-down.
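In code terms the flip amounts to negating every y coordinate, which is exactly what scale(1,-1) does (a minimal sketch; `to_svg_space` is an invented name):

```python
def to_svg_space(points):
    # Mirror y-up font coordinates into SVG's y-down coordinate
    # system by negating the y component of each outline point.
    return [(x, -y) for (x, y) in points]

# A point at cap height (y=700) ends up at y=-700 in SVG space,
# a descender point (y=-200) ends up at y=200.
flipped = to_svg_space([(10, 700), (0, -200)])
```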

Animation

Now if you really want to pimp your fonts you should add some unnecessary animation to annoy everybody. Just insert it between the opening and closing tags of whatever you want to modify. Here is an example of a circle changing its fill-opacity from 0 to 1 over a duration of 500ms in a loop.

<circle>
<animate    attributeName="fill-opacity" 
            begin="0" 
            dur="500ms" 
            from="0" 
            to="1" 
            repeatCount="indefinite"/>
</circle>

Implementation

Technically these fonts should work in any application that works with otfs or ttfs. But as of this writing only Firefox shows the svg. If the rendering is not supported the application will just use the regular glyph outlines as a fallback. So if you have your font(s) ready it’s time to write some css and html to test and display them on a website.

The @font-face

@font-face {
font-family: "Colors-Yes"; /* reference name */
src: url('./fonts/Name_of_your_font.ttf');
font-weight: 400; /* or whatever applies */
font-style: normal; /* or whatever applies */
text-rendering: optimizeLegibility; /* maybe */
}

The basic css

.color_font { font-family: "Colors-Yes"; }

The HTML

<p class="color_font">Shiny polychromatic text</p>

Restrictions

As of this writing (October 2014) the format is supported by Firefox (26+) only. Since this was initiated by Adobe and Mozilla there might be a broader support in the future.

While using svg has the advantage of reasonably small files, and the content does not have to be multiplied, it brings one major drawback. Since the colors are ‘hard-coded’ into the font there is no possibility to access them with css. Hopefully this might change with the implementation of a <COLR/CPAL> table.

There is a bug that keeps animations from being played in Firefox 32. While animations are rendered in the current version (33) this might change for obvious reasons.

Depending on how you establish your svg table it might blow up and result in fairly big files. Be aware of that in case you use such fonts to render the most crucial content of your websites.

Examples

Links, Credits & Thanks

Thanks Erik, Frederik, Just and Tal for making great tools!

Software CarpentryA New Lesson Template, Version 2

Update: this post now includes feedback from participants in the instructor training session run at TGAC on Oct 22-23, 2014. Please see the bottom of this page for their comments.

Thanks to everyone for their feedback on the first draft of our new template for lessons. The major suggestions were:

  1. We need to explain how this template supports student experience, quality lesson planning, etc. It's not clear now how compliance with these Markdown formatting rules will help improve teaching and learning.
  2. The template needs to be much simpler. As Andromeda Yelton said, "It just looks to me like a minefield of ways to get things wrong—things that have nothing to do with pedagogy..."
  3. There needs to be a validator that authors can run locally before submitting changes or new lessons. (The proposal did mention that make check would run bin/check.py, but this point needs to be more prominent.)
  4. Every topic should have at least one challenge, and challenges should be explicitly connected to specific learning objectives.
  5. We need a clearer explanation of the difference between the reference guide (which is meant to be a cheat sheet for learners to take away) and the instructor's guide (which is meant to be tips and tricks for teaching). We should also suggest formatting rules for both.
  6. The instructor's guide should explicitly present each lesson's "legend", i.e., the story that ties it together which instructors gradually reveal to learners.
  7. We need to decide whether the instructor's guide is a separate document, or whether there are call-out sections in each topic for instructors. The former puts the whole story in one place, and helps updaters to see the whole thing when making changes; the latter puts it in context, and helps updaters check that the instructor's material is consistent with the lesson material.
  8. Every topic should explicitly list the time required to teach it. (We should do this for topics, rather than for whole lessons, because people often don't get through all of the latter, which makes timing reports difficult to interpret.)
  9. We need to make it clear that lessons must be CC-BY licensed to permit remixing.

With all that in mind, here's another cut at the template—as before, we'd be grateful for comments. Note that this post mingles description of "what" with explanation of "why"; the final guide for people building lessons will disentangle them to make the former easier to follow.

Note also that Trevor Bekolay has drafted an implementation at https://github.com/tbekolay/swc-template for people who'd like to see what the template would look like in practice. There's still work to do (see below), but it's a great start. Thanks to Trevor, the other Trevor, Erik with a 'k', Molly, Emily, Karin, Rémi, and Andromeda for their feedback.

To Do

  • Some people suggested getting rid of the web/ folder and having lessons load CSS, Javascript, and standard images from the web. This would reduce the size of the repository, and help ensure consistency, but (a) a lot of people write when they're offline (I'm doing it right now), and (b) people may not want their lessons' appearance to change outwith their control.
  • We need to figure out how example programs will reference data files (i.e., what paths they will use). See the notes under "Software, Data, and Images" below for full discussion.
  • Trevor Bekolay writes:
    I took a stab at implementing a minimal motivation slide deck. Unfortunately this isn't very clean right now; I just included the <section> and <script> tags in the Markdown, which I know we want to avoid. I initially had the slides in a separate Markdown file, which is possible with reveal.js. There are a few weird things with this though, which we may or may not be able to fix, since we're limited in what we can do with Jekyll. Briefly, we can have the slides.html layout do something like this:
    <div class="slides"><section data-markdown="blog/2014/10/new-lesson-template-v2.html" data-separator="^\n\n\n" data-vertical="^\n\n"></section></div>
    The only wart with this is that the Markdown file (i.e., page.path) doesn't get copied to _site. I couldn't figure out a way to do it using vanilla Jekyll, but it might be possible. Even if it does get copied, however, we might have to strip out the YAML header.

Terms

  • A lesson is a complete story about some subject, typically taught in 2-4 hours.
  • A topic is a single scene in that story, typically 5-15 minutes long.
  • A slug is a short identifier for something, such as filesys (for "file system").

Design Choices

  • We define everything in terms of Markdown. If lesson authors want to use something else for their lessons (e.g., IPython Notebooks), it's up to them to generate and commit Markdown formatted according to the rules below.
  • We avoid putting HTML inside Markdown: it's ugly to read and write, and error-prone to process. Instead, we put things that ought to be in <div> blocks, like the learning objectives and challenge exercises, in blocks indented with >, and do a bit of post-processing to attach the right CSS classes to these blocks.
  • Whatever Markdown-to-HTML converter we use must support {.attribute} syntax for specifying anchors and classes rather than the clunky HTML-in-Markdown syntax our current notes have to use to be compatible with Jekyll.
  • Any "extra" metadata (e.g., the human language of the lesson) will go into the YAML header of the lesson's index page rather than into a separate configuration file.

Justification and Tutorial

The main Software Carpentry website will contain a one-page tutorial explaining (a) how to create and update lessons and (b) how the various parts of the template support better teaching. A sketch of the second of these is:

  • A standard layout so that:
    1. Lessons have the same look and feel, and can be navigated in predictable ways, even when they are written by different (and multiple) people.
    2. Contributors know where to put things when they are extending or modifying lessons.
    3. Content can more easily be checked. For example, we want to make sure that every learning objective is matched by a challenge, and that every challenge corresponds to one or more learning objectives.
    In the longer term, a standard format will help us build tools, but the format must be justifiable in terms of short-term gains for instructors and learners.
  • One short page per topic: to show each learning sprint explicitly, and to create small chunks for recording timings. The cycle we expect is:
    1. Explain the topic's objectives.
    2. Teach it.
    3. Do one or more challenges (depending on time).
  • Introductory slides: to give learners a sense of where the next couple or three hours are going to take them.
  • Reference guide: because everybody wants a cheat sheet. This includes a glossary of terms to help lesson authors think through what they expect learners to be unfamiliar with, and to make searching through lessons easier.
  • Instructor's guide: our collected wisdom, and solutions to the challenge exercises. Once lessons have been reformatted, we will ask everyone who teaches for us to review and update the instructor's guide for each lesson they taught after each workshop. Note that the instructor's guide (including challenge solutions) will be on the web, both because we believe in openness, and because it's going to be publicly readable anyway.
  • Tools: because machines should check formatting rules, not people.

Overall Layout

Each lesson is stored in a directory laid out as described below. That directory is a self-contained Git repository (i.e., there are no submodules or clever tricks with symbolic links).

  1. index.md: the home page for the lesson. (See "Home Page" below.)
  2. dd-slug.md: the topics in the lesson. dd is a sequence number such as 01, 02, etc., and slug is an abbreviated single-word mnemonic for the topic. Thus, 03-filesys.md is the third topic in this lesson, and is about the filesystem. (Note that we use hyphens rather than underscores in filenames.) See "Topics" below.
  3. motivation.md: slides for a short introductory presentation (three minutes or less) explaining what the lesson is about and why people would want to learn it. See "Introductory Slides" below.
  4. reference.md: a cheat sheet summarizing key terms and commands, syntax, etc., that can be printed and given to learners. See "Reference Guide" below.
  5. instructors.md: the instructor's guide for the lesson. See "Instructor's Guide" below.
  6. code/: a sub-directory containing all code samples. See "Software, Data, and Images" below.
  7. data/: a sub-directory containing all data files for this lesson. See "Software, Data, and Images" below.
  8. img/: images (including plots) used in the lesson. See "Software, Data, and Images" below.
  9. tools/: tools for managing lessons. See "Tools" below.
  10. _layouts/: page layout templates. See "Layout" below.
  11. _includes/: page inclusions. See "Layout" below.

Home Page

index.md must be structured as follows:

---
layout: lesson
title: Lesson Title
keywords: ["some", "key terms", "in a list"]
---
Paragraph of introductory material.

> ## Prerequisites
>
> A short paragraph describing what learners need to know
> before tackling this lesson.

## Topics

* [Topic Title 1](01-slug.html)
* [Topic Title 2](02-slug.html)

## Other Resources

* [Introduction](intro.html)
* [Reference Guide](reference.html)
* [Instructor's Guide](guide.html)

Notes:

  • The description of prerequisites is prose for human consumption, not a machine-comprehensible list of dependencies. We may supplement the former with the latter once we have more experience with this lesson format and know what we actually want to do. The block must be titled "Prerequisites" so we can detect it and style it properly.
  • Software installation and configuration instructions aren't in the lesson, since they may be shared with other lessons. They will be stored centrally on the Software Carpentry web site and linked from the lessons that need them.

Topics

Each topic must be structured as follows:

---
layout: topic
title: Topic Title
minutes: MM
---
> ## Learning Objectives {.objectives}
>
> * Learning objective 1
> * Learning objective 2

Paragraphs of text mixed with:

~~~ {.python}
some code:
    to be displayed
~~~
~~~ {.output}
output
from
program
~~~
~~~ {.error}
error reports from program (if any)
~~~

and possibly including:

> ## Callout Box {.callout}
>
> An aside of some kind.

> ## Challenge Title {.challenge}
>
> Description of a single challenge.
> There may be several challenges.

Notes:

  1. The "expected time" heading is called minutes to encourage people to create topics that are short (10-15 minutes at most).
  2. There are no sub-headings inside a topic other than the ones shown: if a topic needs sub-headings, it should be broken into two or more topics.
  3. We need to figure out how to connect challenges back to learning objectives. Markdown doesn't appear to allow us to add id attributes to list elements, or to create anchors that challenges can refer back to.

Introductory Slides

Every lesson must include a short slide deck suitable for a short presentation (3 minutes or less) that the instructor can use to explain to learners how knowing the subject will help them. Slides are written in Markdown, and compiled into HTML for use with reveal.js.

Notes:

  1. We should provide an example.

Reference Guide

The reference guide is a cheat sheet for learners to print, doodle on, and take away. The format of the actual guide is deliberately unconstrained for now, since we'll need to see a few before we can decide how they ought to be laid out (or whether they need to be laid out the same way at all).

The last thing in it must be a Level-2 heading called "Glossary" followed by definitions of key terms. Each definition must be formatted as a separate blockquote indented with > signs:

---
layout: reference
---
...commands and examples...

## Glossary

> **Key Word 1**: the definition
> relevant to the lesson.

> **Key Word 2**: the definition
> relevant to the lesson.

Again, we use blockquotes because standard [sic] Markdown doesn't have a graceful syntax for <div> blocks. If definition lists become part of CommonMark, or if we standardize on Pandoc as our translation engine, we can use definition lists here instead of hacking around with blockquotes.

Instructor's Guide

Many learners will go through the lessons outside of class, so it seems best to keep material for instructors in a separate document, rather than interleaved in the lesson itself. Its structure is:

---
title: Instructor's Guide
---
## Overall

One or more paragraphs laying out the lesson's legend.

## General Points

* Point
* Point

## Topic 1

* Point
* Point

## Topic 2

* Point
* Point

Notes:

  1. The topic headings must match the topic titles. (Yes, we could define these as variables in a configuration file and refer to those variables everywhere, but in this case, repetition will be a lot easier to read, and our validator can check that the titles line up.)
  2. The points can be anything: specific ways to introduce ideas, common mistakes learners make and how to get out of them, or anything else.

Software, Data, and Images

All of the software samples used in the lesson must go in a directory called code/. Stand-alone data files must go in a directory called data/. Groups of related data files must be put together in a sub-directory of data/ with a meaningful (short) name.

Images used in the lessons must go in an img/ directory. We strongly prefer SVG for line drawings, since they are smaller, scale better, and are easier to edit. Screenshots and other raster images must be PNG or JPEG format.

Notes:

  1. This mirrors the layout a scientist would use for actual work (see Noble's "A Quick Guide to Organizing Computational Biology Projects" or Gentzkow and Shapiro's "Code and Data for the Social Sciences: A Practitioner's Guide").
  2. However, it may cause novice learners problems. If code/program.py includes a hard-wired path to a data file, that path must be either data/datafile.ext or ../data/datafile.ext. The first will only work if the program is run with the lesson's root directory as the current working directory, while the second will only work if the program is run from within the code/ directory. This is a learning opportunity for students working from the command line, but a confusing annoyance inside IDEs and the IPython Notebook (where the tool's current working directory is less obvious). And yes, the right answer is to pass filenames on the command line, but that requires learners to understand how to get command line arguments, which isn't something they'll be ready for in the first hour or two.
  3. We have removed the requirement for an index file in the code/ and data/ directories. It is tempting to require code fragments in topics to have an extra attribute src="code/filename.ext" so that we can prune files that are no longer used as lessons change, but that may be more effort than authors are willing to put in.

Tools

The tools/ directory contains tools to help create and maintain lessons:

  • tools/check: make sure that everything is formatted properly, and print error messages identifying problems if it's not.
  • tools/build: build the lesson website locally for previewing. This assumes tools/check has given the site a clean bill of health.
  • tools/update: run the right Git commands to update shared files (e.g., layout templates).
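To make these checks concrete, here is a small sketch in JavaScript of two rules tools/check might enforce: the dd-slug.md filename convention and the required topic header. The function names and exact rules are illustrative assumptions, not the real script's behavior.

```javascript
// Hypothetical sketch of checks tools/check might perform.

// A topic filename must look like "03-filesys.md": two digits, a hyphen,
// and a lowercase slug (hyphens allowed, no underscores).
function isValidTopicFilename(name) {
  return /^\d{2}-[a-z][a-z-]*\.md$/.test(name);
}

// A topic file must begin with a YAML header declaring layout, title,
// and minutes, and must contain a "Learning Objectives" block.
function checkTopic(source) {
  const errors = [];
  const header = source.match(/^---\n([\s\S]*?)\n---/);
  if (!header) {
    errors.push('missing YAML header');
    return errors;
  }
  for (const field of ['layout', 'title', 'minutes']) {
    if (!new RegExp('^' + field + ':', 'm').test(header[1])) {
      errors.push('missing required field: ' + field);
    }
  }
  if (!/^> ## Learning Objectives/m.test(source)) {
    errors.push('missing Learning Objectives block');
  }
  return errors;
}
```

A lesson author would run checks like these over each topic file and fix anything reported before running tools/build.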

Layout

The template still contains _layouts/ and _includes/ directories for page layout templates and standard inclusions. These are needed to support lesson preview.

Major Changes

  • We no longer rely on Make. Instead, the two key tools are scripts in the tools/ directory.
  • There is no longer a separate glossary page. Instead, the glossary is part of the reference guide given to learners.
  • The index page no longer lists overall learning objectives, since learning objectives should all be paired with challenges.
  • Topic pages no longer have key points: anything that would have gone here properly belongs in the reference guide.

Feedback from TGAC Instructor Trainees

Participants in the instructor training session run at TGAC on Oct 22-23 gave us feedback on the content shown above. Their points are listed below; we'll try to factor them into the final template.

Good

  • Details +2
  • Lots of technical detail
  • Enables flexibility - adding contents
  • Markdown
  • Helps to structure / think about content
  • Good outline of what you want to do
  • Good organisation
  • Enough detail for somebody who doesn't have much experience
  • Uncomplicated visually
  • Required variables section
  • Proper highlighting for the syntax part
  • Clearly listed variables

Bad

  • Assumed knowledge (keywords) +2
  • Not much introduction +2
  • Overwhelming
  • Some terms jargon unclear
  • Not live yet so you can't check if works
  • Mixed instructions (website + Jekyll info)
  • Text on the lesson template needs reordering (restructuring)
  • See Markdown rendered so that it's easier to review
  • Key info down at the bottom
  • More visual info
  • No "Get in touch" info
  • Customizing lessons badly explained
  • Which md and translators to use
  • Colours (background + foreground)
  • example.edu for email

Questions/Suggestions

  • Maybe two different overviews (depending on the audience) +2
  • Why these engineering choices were made? (if that was supposed to be simple)
  • Troubleshooting?
  • Shortcut to the "how to set it up and skip the whole info"?
  • How is feedback to lessons made available to others?
  • Metasection on each lesson - which audience it is particularly working well with?
  • Why should we create our own website?

The Mozilla BlogIntroducing the 2015 Knight-Mozilla Fellows

The Knight-Mozilla Fellowships bring together developers, technologists, civic hackers, and data crunchers to spend 10 months working on open source code with partner newsrooms around the world. The Fellowships are part of the Knight-Mozilla OpenNews project, supported by the John … Continue reading

Air MozillaProduct Coordination Meeting

Product Coordination Meeting Weekly coordination meeting for Firefox Desktop & Android product planning between Marketing/PR, Engineering, Release Scheduling, and Support.

Air MozillaBugzilla Development Meeting

Bugzilla Development Meeting Help define, plan, design, and implement Bugzilla's future!

hacks.mozilla.orgThe Visibility Monitor supported by Gaia

With booming demand for ultra-low-price devices, we have to budget each hardware resource, such as CPU, RAM, and flash, more carefully. Here I want to introduce the Visibility Monitor, which has existed in Gaia for a long time.

Origin

The Visibility Monitor originated in Gaia's Gallery app, appearing for the first time in Bug 809782 (gallery crashes if too many images are available on sdcard). It solves the memory shortage caused by keeping too many images in the Gallery app. Some time later its “brother”, the Tag Visibility Monitor, was born. Their functionality is almost identical, except that the Tag Visibility Monitor filters the elements to monitor by pre-assigned tag names. We will use the Tag Visibility Monitor as the example in the following sections; everything applies to the Visibility Monitor as well.

For your information, the Visibility Monitor was written by JavaScript master David Flanagan, author of JavaScript: The Definitive Guide, who works at Mozilla.

Working Principle

Basically, the Visibility Monitor removes the images that are outside of the visible screen from the DOM tree, so Gecko has the chance to release the image memory which is temporarily used by the image loader/decoder.

You may ask: “Gecko could do this. Why do it in Gaia?” In fact, Gecko enables its own visibility handling by default, but it only discards decoded image buffers (the uncompressed output of the image decoder); the original compressed images, fetched by the image loader from the Internet or the local file system, remain in memory. The Visibility Monitor in Gaia removes images from the DOM tree entirely, so even the originals held by the image loader are released. This is extremely important for Tarako, the codename of the low-end Firefox OS device project, whose devices have only 128MB of memory.


Taking the graphic above as an example, we can divide the whole screen into:

  • display port
  • pre-rendered area
  • margin
  • all other area

As the display port moves up and down, the Visibility Monitor dynamically loads the pre-rendered area; images outside the pre-rendered area are neither loaded nor decoded. The Visibility Monitor treats the margin as a dynamically adjustable parameter.

  • The higher the margin value, the more of the image Gecko has to pre-render, which means more memory usage but smoother scrolling (higher FPS).
  • Vice versa: the lower the margin value, the less Gecko has to pre-render, which means less memory usage but less smooth scrolling (lower FPS).

Because of this working principle, we can adjust the parameters and image quality to match our demands.
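The on-screen test implied by this working principle can be sketched as a pure function: an element counts as on screen as soon as a single pixel of it overlaps the display port extended by the margin. This is a simplified illustration with assumed parameter names, not Gaia's actual code.

```javascript
// Sketch (not Gaia's real implementation) of the pre-rendered-area test.
// `view` stands in for the scrolling container; `margin` is the extra
// space above and below the display port.
function isInPrerenderedArea(el, view, margin) {
  const top = view.scrollTop - margin;                         // top edge of pre-rendered area
  const bottom = view.scrollTop + view.clientHeight + margin;  // bottom edge
  // Overlap test: the element's top is above the area's bottom edge
  // and its bottom is below the area's top edge.
  return el.offsetTop < bottom && el.offsetTop + el.offsetHeight > top;
}
```

With a 480px-tall display port and a margin of 360, for example, elements up to 360 pixels above or below the visible region would stay rendered.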

Prerequisites

It’s impossible to “have your cake and eat it too”, and just as impossible to use the Visibility Monitor without accepting its constraints. The prerequisites for using the Visibility Monitor are listed below.

The monitored HTML DOM Elements are arranged from top to bottom

The default web layout flows from top to bottom, but CSS options such as flex-flow can reverse it. Such bottom-to-top layouts would make the Visibility Monitor more complex and lower the FPS (a result we do not like), so they are not supported. If you use such a layout, the Visibility Monitor shows nothing in the areas where it should display images and reports errors instead.

The monitored HTML DOM Elements cannot be absolutely positioned

The Visibility Monitor calculates the height of each HTML DOM Element to decide whether to display it. When an element is fixed at a certain location, the calculation becomes more complex, which is unacceptable. If you use this kind of arrangement, the Visibility Monitor shows nothing in the area where it should display images and sends an error message.

The monitored HTML DOM Elements should not dynamically change their position through JavaScript

As with absolute positioning, dynamically changing an HTML DOM Element's location makes the calculation more complex; both are unacceptable. If you use this kind of arrangement, the Visibility Monitor shows nothing in the affected area.

The monitored HTML DOM Elements cannot be resized or be hidden, but they can have different sizes

The Visibility Monitor uses a MutationObserver to watch for the addition and removal of HTML DOM Elements, but not for an element appearing, disappearing, or resizing. If you use this kind of arrangement, the Visibility Monitor again shows nothing.

The container which runs monitoring cannot use position: static

Because the Visibility Monitor uses offsetTop to calculate the location of the display port, the container cannot use position: static. We recommend using position: relative instead.

The container which runs monitoring can only be resized by the resizing window

The Visibility Monitor uses the window.onresize event to decide whether to re-calculate the pre-rendered area. So any change to the container's size must be accompanied by a resize event.

Tag Visibility Monitor API

The Visibility Monitor API is very simple and has only one function:

function monitorTagVisibility(
    container,
    tag,
    scrollMargin,
    scrollDelta,
    onscreenCallback,
    offscreenCallback
)

The parameters it accepts are defined as follows:

  1. container: a real HTML DOM Element that the user scrolls. It doesn't have to be the direct parent of the monitored elements, but it must be one of their ancestors
  2. tag: a string naming the element type to monitor
  3. scrollMargin: a number defining the size of the margin outside the display port
  4. scrollDelta: a number defining how many pixels must be scrolled before the pre-rendered area is recalculated
  5. onscreenCallback: a callback function called after an HTML DOM Element moves into the pre-rendered area
  6. offscreenCallback: a callback function called after an HTML DOM Element moves out of the pre-rendered area

Note: the “move into” and “move out” mentioned above mean the following: as soon as even one pixel of an element is inside the pre-rendered area, it counts as moving into (or remaining on) the screen; as soon as no pixel is inside the pre-rendered area, it counts as moving out of (or being off) the screen.

Example: Music App (1.3T branch)

One of my tasks was to add the Visibility Monitor to the 1.3T Music app. Because I lacked understanding of the Music app's structure, I asked a colleague for help finding where to add it, which turned out to be three locations:

  • TilesView
  • ListView
  • SearchView

Here we take only TilesView as an example to demonstrate how to add it. First, we use the App Manager to find the real HTML DOM Element in TilesView that is scrolled:


With the App Manager, we find that TilesView contains views-tile, views-tiles-search, views-tiles-anchor, and li.tile elements (the tiles, under all three of them). Testing shows that the scrollbar appears on views-tile, and views-tiles-search is then automatically scrolled out of view. Each tile is an li.tile element, so we set the container to views-tile and the tag to li. The following code calls the Visibility Monitor:

monitorTagVisibility(
    document.getElementById('views-tile'), 
    'li', 
    visibilityMargin,    // extra space top and bottom
    minimumScrollDelta,  // min scroll before we do work
    thumbnailOnscreen,   // set background image
    thumbnailOffscreen // remove background image
);

In the code above, visibilityMargin is set to 360, or 3/4 of the screen height. minimumScrollDelta is set to 1, meaning the pre-rendered area is recalculated for every pixel scrolled. thumbnailOnscreen and thumbnailOffscreen set the background image of a thumbnail and clean it up, respectively.
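For illustration, here is a minimal sketch of what the two callbacks could look like. This is not the Music app's real implementation; the data-background-url attribute is a hypothetical place to stash each tile's cover art URL when the tile is built.

```javascript
// Hypothetical callbacks for monitorTagVisibility; attribute and
// function names are illustrative, not the Music app's actual code.
function thumbnailOnscreen(tile) {
  // Restore the cover art when the tile enters the pre-rendered area.
  if (tile.dataset.backgroundUrl) {
    tile.style.backgroundImage = 'url(' + tile.dataset.backgroundUrl + ')';
  }
}

function thumbnailOffscreen(tile) {
  // Dropping the background image lets Gecko release both the decoded
  // buffer and the original compressed image.
  tile.style.backgroundImage = null;
}
```

Keeping the URL in a data attribute rather than in the style means the offscreen callback can drop the image entirely while the onscreen callback can still restore it later.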

The Effect

We performed practical tests on the Tarako device. We launched the Music app and had it load nearly 200 MP3 files with cover images, totaling about 900MB. Without the Visibility Monitor, the Music app's memory usage for images was as follows:

├──23.48 MB (41.04%) -- images
│  ├──23.48 MB (41.04%) -- content
│  │  ├──23.48 MB (41.04%) -- used
│  │  │  ├──17.27 MB (30.18%) ── uncompressed-nonheap
│  │  │  ├───6.10 MB (10.66%) ── raw
│  │  │  └───0.12 MB (00.20%) ── uncompressed-heap
│  │  └───0.00 MB (00.00%) ++ unused
│  └───0.00 MB (00.00%) ++ chrome

With the Visibility Monitor, memory usage dropped to the following:

├───6.75 MB (16.60%) -- images
│   ├──6.75 MB (16.60%) -- content
│   │  ├──5.77 MB (14.19%) -- used
│   │  │  ├──3.77 MB (09.26%) ── uncompressed-nonheap
│   │  │  ├──1.87 MB (04.59%) ── raw
│   │  │  └──0.14 MB (00.34%) ── uncompressed-heap
│   │  └──0.98 MB (02.41%) ++ unused
│   └──0.00 MB (00.00%) ++ chrome

Comparing the two:

├──-16.73 MB (101.12%) -- images/content
│  ├──-17.71 MB (107.05%) -- used
│  │  ├──-13.50 MB (81.60%) ── uncompressed-nonheap
│  │  ├───-4.23 MB (25.58%) ── raw
│  │  └────0.02 MB (-0.13%) ── uncompressed-heap
│  └────0.98 MB (-5.93%) ── unused/raw

To make sure the Visibility Monitor works properly, we added more MP3 files, reaching about 400 in total. Memory usage stayed at around 7MB, which is great progress for a 128MB device.

Conclusion

Honestly, we don't need the Visibility Monitor when there aren't many images; since it always affects FPS, we can let Gecko handle those cases. For apps that use lots of images, however, the Visibility Monitor lets us control memory usage: even as the number of images grows, memory usage stays stable.

The margin and delta parameters of the Visibility Monitor affect FPS and memory usage, which can be summarized as follows:

  • higher margin: more memory usage; FPS closer to Gecko's native scrolling
  • lower margin: less memory usage; lower FPS
  • higher delta: slightly more memory usage; higher FPS; higher chance of seeing unloaded images
  • lower delta: slightly less memory usage; lower FPS; lower chance of seeing unloaded images

WebmakerMozFest 2014: Spotlight on “Community Building”

This is the ninth post in a series featuring interviews with the 2014 Mozilla Festival “Space Wranglers,” the curators of the many exciting programmatic tracks slated for this year’s Festival.

For this edition, we chatted with Beatrice Martini and Bekka Kahn, who are co-wrangling the Community Building track at MozFest—a track all about being members, builders and fuel of communities joining forces as part of the Open Web movement.

What excites you most about your track?

In the early days of the web, Mozilla pioneered community building efforts together with other open source projects. Today, the best practices have changed and there are many organisations to learn from. Our track aims to convene these practitioners and join forces to create a future action roadmap for the Open Web movement.

Building and mobilising community action requires expertise and understanding of both tools and crowd. The relationships between stakeholders need to be planned with inclusivity and sustainability in mind.

Our track has the ambitious aim of telling the story of this powerful and groundbreaking system. We hope to create a space where both newcomers and experienced community members can meet, share knowledge, learn from each other, get inspired, and leave the festival feeling empowered and equipped with a plan for their next action.

The track will feature participatory sessions (there’s no projector in sight!), an ongoing wall-space action and a handbook writing sprint. In addition, participants and passers-by will be encouraged to answer the question: “What’s the next action, of any kind/size/location, you plan to take for the Open Web movement?”

Who are you working with to make this track happen?

We’ve been very excited to have the opportunity to collaborate with many great folks, old friends and new, to build such an exciting project. The track was added just a few weeks before the event, so it’s very emergent—just the way we like it!

We believe that collaboration between communities is what can really fuel the future of the Open Web movement. We put this belief into practice through our curatorship structure, as well as in planning the track’s programme, which combines great ideas sent through the festival’s Call for Proposals with invitations to folks we knew would be able to blow people’s minds with 60 minutes and a box of paper and markers at their disposal.

How can someone who isn’t able to attend MozFest learn more or get involved in this topic?

Anyone will be welcome to connect with us in (at least) three ways.

  1. We’ll have a dedicated hashtag to keep all online/remote Community conversations going: follow and engage with #MozFestCB on your social media platform of choice; we’ll keep a curated version of the feed on our Storify.
  2. We’ll also collect all notes, resources and documentation of anything that happens in and around the track on our online home.
  3. The work to create a much-awaited Community Building Handbook will be kicked off at MozFest, and anyone who thinks they could enrich it with useful learnings is invited to join the writing effort, from anywhere in the world.

 

Air MozillaBlowing up the Atomic Barrier: Atomics in C and C++

Blowing up the Atomic Barrier: Atomics in C and C++ Robin from Google will discuss Atomics in C and C++, and specifically their use in LLVM.

WebmakerMozFest 2014 Keynote Speakers


We’re excited to welcome a slate of thought-provoking keynote speakers who will discuss the state of the web today, why an open web matters more than ever, and how you can get involved in building the web of the future.

Beeban Kidron
Film Director & Co-Founder, FILMCLUB


The Baroness Beeban Kidron has been directing films for more than 30 years and is a joint founder of FILMCLUB, an educational charity that allows children to watch and analyze internationally iconic films. Each week the charity reaches 220,000 children in more than 7,000 clubs.

Kidron is best known for directing Bridget Jones: The Edge of Reason and the Bafta-winning miniseries Oranges Are Not the Only Fruit. She also directed To Wong Foo Thanks for Everything, Julie Newmar, Antonia and Jane, as well as two documentaries on prostitution: Hookers, Hustlers, Pimps and their Johns, and Sex, Death and the Gods, a film about “devadasi,” or Indian “sacred prostitutes.”

Her latest film, InRealLife, explores the first generation of British teenagers who are growing up having never known a time before smartphones and social media, whose childhoods are defined by status updates, emails and digitized friendships.

Mary Moloney
Global CEO, CoderDojo


Mary joined the CoderDojo Foundation team in June 2014 as Global CEO. Prior to that, she was a partner in Accenture’s strategy practice, leading engagements with international clients in the media, high tech, telco and financial services sectors. During her 23 years with Accenture, Mary held a number of lead positions, including partner, managing director and multiple C-suite roles. She has also been involved at board level with a number of non-profit organizations, and remains on the boards of the Dublin Fringe Festival and the Professional Women’s Network. Her 9-year-old and 7-year-old sons are active ninjas who participate at the Science Gallery and Sandymount Dojos near where she lives in Dublin.
@marydunph

Mark Surman
Executive Director, Mozilla Foundation


A community activist and technology executive of 20+ years, Mark currently serves as the Executive Director of the Mozilla Foundation, makers of Firefox and one of the largest social enterprises in the world. At Mozilla, he is focused on using the open technology and ethos of the web to transform fields such as education, journalism and filmmaking. Mark has overseen the development of Popcorn.js, which Wired has called the future of online video; the Open Badges initiative, launched by the US Secretary of Education; and the Knight Mozilla News Technology partnership, which seeks to reinvent the future of digital journalism.

Prior to joining Mozilla, Mark was awarded one of the first Shuttleworth Foundation Fellowships, where he explored the application of open principles to philanthropy. During his fellowship, he advised a Harvard Berkman study on open source licensing in foundations, was the lead author on the Cape Town Open Education Declaration, and organized the first open education track at the iCommons Summit, which led to him becoming a founding board member of Peer-to-peer University (P2PU). Mark holds a BA in the History of Community Media from the University of Toronto.
@msurman

Mitchell Baker
Executive Chairwoman, Mozilla


As the leader of the Mozilla Project, Mitchell Baker is responsible for organizing and motivating a massive, worldwide collective of employees and volunteers who are breathing new life into the Internet with the Firefox Web browser, Firefox OS and other Mozilla products.

Mitchell was born and raised in Berkeley, California, receiving her BA in Asian Studies from UC Berkeley and her JD from the Boalt Hall School of Law. Mitchell has been the general manager of the Mozilla project since 1999. She served as CEO of Mozilla until January 2008, when the organization’s rapid growth encouraged her to split her responsibilities and add a CEO. Mitchell remains deeply engaged in developing product offerings that promote the mission of empowering individuals. She also guides the overall scope and direction of Mozilla’s mission.
@MitchellBaker

Get Involved:

 

hacks.mozilla.orgNew on MDN: Sign in with Github!

MDN now gives users more options for signing in!

Sign in with GitHub

Signing in to MDN previously required a Mozilla Persona account. Getting a Persona account is free and easy, but MDN analytics showed a steep drop-off at the “Sign in with Persona” interface. For example, almost 90% of signed-out users who clicked “Edit” never signed in, which means they never got to edit. That’s a lot of missed opportunities!

It should be easy to join and edit MDN. If you click “Edit,” we should make it easy for you to edit. Our analysis demonstrated that most potential editors stumbled at the Persona sign in. So, we looked for ways to improve sign in for potential contributors.

Common sense suggests that many developers have a GitHub account, and analysis confirms it. Of the MDN users who list external accounts in their profiles, approximately 30% include a GitHub account. GitHub is the 2nd-most common external account listed, after Twitter.

That got us thinking: If we integrated GitHub accounts with MDN profiles, we could one day share interesting GitHub activity with each other on MDN. We could one day use some of GitHub’s tools to create even more value for MDN users. Most immediately, we could offer “sign in with GitHub” to at least 30% (but probably more) of MDN’s existing users.

And if we did that, we could also offer “sign in with GitHub” to over 3 million GitHub users.

The entire engineering team and MDN community helped make it happen.

Authentication Library

Adding the ability to authenticate using GitHub accounts required us to extend the way MDN handles authentication so that MDN users can start to add their GitHub accounts without effort. We reviewed the current code of kuma (the code base that runs MDN) and realized that it was deeply integrated with how Mozilla Persona works technically.

As we’re constantly trying to remove technical debt, that meant revisiting some of the decisions made years ago, when the code responsible for authentication was written. After a review process we decided to replace our home-grown system, django-browserid, with a third-party library called django-allauth. It is a well-known system in the Django community and is able to use multiple authentication providers side by side – Mozilla Persona and GitHub in our case.
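For readers curious what running two providers side by side looks like, here is a hedged sketch of a Django settings fragment using django-allauth’s per-provider apps. The values shown are illustrative assumptions, not kuma’s actual configuration:

```python
# settings.py (illustrative sketch, not kuma's actual configuration)
# django-allauth ships one app per provider; installing the GitHub and
# Persona provider apps together is what lets both sign-in options coexist.
INSTALLED_APPS = [
    # ... project apps ...
    "django.contrib.sites",          # required by allauth
    "allauth",
    "allauth.account",
    "allauth.socialaccount",
    "allauth.socialaccount.providers.github",
    "allauth.socialaccount.providers.persona",
]

AUTHENTICATION_BACKENDS = (
    "django.contrib.auth.backends.ModelBackend",
    "allauth.account.auth_backends.AuthenticationBackend",
)
```

Provider credentials (the GitHub OAuth client id and secret) are then registered per site, so the same code base can authenticate against different GitHub applications in development, staging and production.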

One challenge was making sure that our existing user database could be ported over to the new system, to reduce the negative impact on our users. To our surprise, this was not a big problem and could be automated with a database migration – a special piece of code that converts the data into the new format. We implemented the new authentication library and migrated accounts to it several months ago; MDN has been using django-allauth for Mozilla Persona authentication since then.
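As a rough, framework-free illustration of what such a migration does, the sketch below turns legacy Persona-backed user rows into allauth-style social account records. The field names are assumptions for illustration, not kuma’s actual schema:

```python
def persona_to_allauth(users):
    """Convert legacy Persona-backed user rows into allauth-style
    social account records (illustrative field names only).

    Persona identified users by verified email address, so the email
    doubles as the provider's unique id (`uid`) in the new format.
    """
    accounts = []
    for user in users:
        accounts.append({
            "user_id": user["id"],
            "provider": "persona",
            "uid": user["email"],
            "extra_data": {},  # Persona carried no extra profile data
        })
    return accounts
```

A real migration runs inside the framework’s migration machinery so it executes exactly once, inside a transaction, against the live tables.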

UX Challenges

We wanted our users to experience a fast and easy sign-up process with the goal of having them edit MDN content at the end. Some things we did in the interface to support this:

  • Remember why the user is signing up and return them to that task when sign up is complete.
  • Pre-fill the username and email address fields with data from GitHub (including pre-checking if they are available).
  • Trust GitHub as a source of confirmed email addresses, so we do not have to confirm the email address before the user can complete signing up.
  • Standardise our language (this is harder than it sounds). Users on MDN “sign in” to their “MDN profile” by connecting “accounts” on other “services”. See the discussion.
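The second item above, pre-filling the username and checking its availability, can be sketched as a small helper. Both the function and its case-insensitivity rule are hypothetical illustrations, not MDN’s actual code:

```python
def suggest_username(github_login, taken):
    """Return github_login if free, else github_login2, github_login3, ...

    `taken` is the set of usernames already registered. We assume (for
    illustration) that usernames are compared case-insensitively.
    """
    taken = {name.lower() for name in taken}
    candidate = github_login
    suffix = 1
    while candidate.lower() in taken:
        suffix += 1
        candidate = f"{github_login}{suffix}"
    return candidate
```

Pre-checking like this lets the sign-up form load with a username that is already known to be available, so the common case needs no typing at all.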

One of our biggest UX challenges was allowing existing users to sign in with a new authentication provider. In this case, the user needs to “claim” an existing MDN profile after signing in with a new service, or needs to add a new sign-in service to their existing profile. We put a lot of work into making sure this was easy both from the user’s profile if they signed in with Persona first and from the sign-up flow if they signed in with GitHub first.

We started with an ideal plan for the UX but expected to make changes once we had a better understanding of what allauth and GitHub’s API are capable of. It was much easier to smooth the kinks out of the flow once we were able to click around and try it ourselves. This was facilitated by the way MDN uses feature toggles for testing.

Phased Testing & Release

This project could potentially corrupt profile or sign-in data, and changes one of our most essential interfaces – sign up and sign in. So, we made a careful release plan with several waves of functional testing.

We love to alpha- and beta-test changes on MDN with feature toggles. To toggle features we use the excellent django-waffle feature-flipper by James Socol – MDN Manager Emeritus.
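django-waffle exposes flags through per-request helpers such as waffle.flag_is_active(request, "flag-name"). The underlying toggle pattern can be illustrated with this self-contained sketch; the class and its fields are illustrative, not waffle’s API:

```python
import hashlib

class FeatureFlag:
    """Minimal feature-toggle sketch (not django-waffle's actual API).

    A flag can be forced on or off for everyone, or rolled out to a
    deterministic percentage of users.
    """
    def __init__(self, name, everyone=None, percent=0):
        self.name = name
        self.everyone = everyone  # True/False overrides; None defers to percent
        self.percent = percent

    def is_active(self, user_id):
        if self.everyone is not None:
            return self.everyone
        # Hash the flag name and user id together so the same user
        # always gets the same answer for a given flag.
        digest = hashlib.sha256(f"{self.name}:{user_id}".encode()).hexdigest()
        return int(digest, 16) % 100 < self.percent
```

A gradual rollout is then just raising `percent` over time, while an alpha test is a flag turned on only for staff accounts.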

We deployed the new code to our MDN development environment every day behind a feature toggle. During this time MDN engineers exercised the new features heavily, finding and filing bugs under our master tracking bug.

When the featureset was relatively complete, we created our beta test page and toggled the feature on our MDN staging environment for even more review. We did the end-to-end UX testing, invited internal Mozilla staff to help us beta test, filed a lot of UX bugs, and started to triage and prioritize launch blockers.

Next, we started an open beta by posting a site-wide banner on the live site, inviting anyone to test and file bugs. 365 beta testers participated in this round of QA. We also asked Mozilla WebQA to help deep-dive into the feature on our stage server. We only received a handful of bugs, which gave us great confidence about a final release.

Launch

It was a lot of work, but all the pieces finally came together and we launched. Because of our extensive testing & release plan, we had zero incidents at launch – no downtime, no stacktraces, no new bugs reported. We’re very excited to release this feature. We’re excited to give more options and features to our incredible MDN users and contributors, and we’re excited to invite each and every GitHub user to join the Mozilla Developer Network. Together we can make the web even more awesome. Sign in now.

Outlook

Now that we have worked out the infrastructure and UX challenges associated with multi-account authentication, we can look for other promising authentication services to integrate with. For example, Firefox Accounts (FxA) is the authentication service that powers Firefox Sync. FxA is integrated with Firefox and will soon be integrated with a variety of other Mozilla services. As more developers sign up for Firefox Accounts, we will look for opportunities to add it to our authentication options.

QMOFirefox OS QA Team Badge

The Firefox OS QA team is happy to introduce its first-ever badge. We will be giving out this badge to all active participants of our community events.

Get involved in our upcoming Bug Bash on October 23, 2014 and earn this badge by entering blocker bugs (i.e. bugs that block the smoketest from running).

We would like to thank Ivana Catovic, who spared time from her busy career as a graphic designer to create this badge for us. Ivana has created several badges for the Mozilla QA team in the past. Her talent and continued involvement in the Mozilla community are greatly appreciated.

WebmakerGet involved with Web Literacy Map v2.0!

TL;DR: Mozilla is working with the community to update the Web Literacy Map to v2.0. You can read more about the project below, or jump straight in and take the survey or join the community calls.

 
Mozilla Festival

Introduction

Mozilla defines web literacy as the skills and competencies needed for reading, writing and participating on the web. To chart these skills and competencies, we worked alongside a community of stakeholders in 2013 to create the Web Literacy Map. You can read more about why Mozilla cares about web literacy in this Webmaker Whitepaper.

The Web Literacy Map underpins the work we do with Webmaker and, in particular, the Webmaker resources section. As the web develops and evolves, we have committed to keeping the Web Literacy Map up-to-date. That’s why we’ve begun work on a version 2.0 of the Web Literacy Map.

To date, we’ve interviewed 38 stakeholders on what they believe the Web Literacy Map is doing well, and how it could be improved. We boiled down their feedback to 21 emerging themes for Web Literacy Map v2.0 and some ideas for how Webmaker could be improved.

 
Mozilla Festival London 2012

Community survey

From the 21 emerging themes mentioned above, we identified five proposals that would help shape further discussion about the Web Literacy Map. These are:

  1. I believe the Web Literacy Map should explicitly reference the Mozilla manifesto.
  2. I believe the three strands should be renamed ‘Reading’, ‘Writing’ and ‘Participating’.
  3. I believe the Web Literacy Map should look more like a ‘map’.
  4. I believe that concepts such as ‘Mobile’, ‘Identity’, and ‘Protecting’ should be represented as cross-cutting themes in the Web Literacy Map.
  5. I believe a ‘remix’ button should allow me to remix the Web Literacy Map for my community and context.

We’ve added these to a survey* which is available in the following languages:

The survey will close on November 1st. If you’d like to translate the survey into another language, please join one of the teams (or create your own!) on Transifex.

*Note: you can email your responses directly if you’d rather not sign into a Google account.

 

Community calls

Today, we’re kicking off a series of seven Web Literacy Map v2.0 community calls. These will be at 3pm UTC:

There is a calendar that you can subscribe to here.

If you can’t make the calls, please do leave notes for discussion on the agenda for an upcoming call using the links above. Alternatively, get involved in the Web Literacy Map discussion area of the #TeachTheWeb forum.

 
Mozilla Maemo Danish Weekend 2009

Conclusion

We’re hoping to have the text of an updated Web Literacy Map finished by Q1 2015. The graphical elements and the reorganization of webmaker.org that it may entail will take longer. We’d be very interested in hearing how you plan to use it in your context.

You can keep up-to-date with everything to do with Web Literacy Map v2.0 by bookmarking this page on the Mozilla wiki.

Finally, there will be a few sessions at the Mozilla Festival next week about the Web Literacy Map. Look out for them, and get involved!


Images: mozillaeu, REV-, Paul Clarke, and William Quiviger

Software CarpentryPresenting the Novice R Materials and Future Plans for the SWC R Community

Approximately seven months after our initial meeting, the SWC R community has developed the first set of R lessons for use both in workshops and for self-directed learning from the SWC website. These novice R lessons are a translation of the current novice Python lessons.

Translating these lessons was a big effort. Many thanks are due to both the major contributions made by Sarah Supp, Diego Barneche, and Gavin Simpson, as well as the contributions made by Carl Boettiger, Josh Ainsley, Daniel Chen, Bernhard Konrad, and Jeff Hollister (please let me know if I missed your contribution/review).

On language-agnostic lesson sets

The current set of novice R lessons fulfill the vision described in a summary of a meeting back in October 2012:

There is a general belief that SWC should be "language agnostic" and primarily teach the computing skills that transcend individual programming languages.

In general, the R-based workshops should reuse as much material as possible from the existing curriculum and contribute language-agnostic improvements and new lessons back to the "main" Python-based lesson set.

Dan Braithwaite and I recently taught these lessons at a workshop for novice biologists, and it went very well. Even though we weren't able to get through the entire lesson on command-line programs, it was very satisfying to see all the lightbulbs go off as they made the connection between the commands they were running in the shell the day before and the R code they wrote the second day (if you're interested, see this thread for more details on how the workshop went).

While focusing on language-agnostic principles enables us to cover lots of big ideas that are the core of Software Carpentry's mission like modular programming and automation, this means sacrificing the discussion of many important R-specific features. This has disappointed some R instructors. The dissenting view can be summarized by some recent posts to the r-discuss mailing list:

Dirk Eddelbuettel wrote (post):

Should we not play to R's strength, rather than to another languages's weaknesses?
And Gavin Simpson added (post):
We seem to be compromising R and an R-like approach just to maintain compatibility with the python lessons.

Thus we have two opposing philosophies. One wants to focus solely on the principles that transcend programming languages; while the other wants to teach the best practices through a more idiomatic approach.

A call for proposals

We set out about seven months ago to create a set of lessons that would be developed, maintained, and used by everyone in the Software Carpentry community who is teaching R. Having built these lessons, I now question whether that was the right goal.

First, while it was an accomplishment to finally have novice R lessons on the SWC website, the work of translating the materials ended up being done by only a few people, and only a few instructors have actually taught these materials in their workshops. Second, there is now another option for running R-based workshops, Data Carpentry. Thus, the common debate on how much we should focus on programming best practices versus data analysis skills has been somewhat resolved: a Software Carpentry workshop should focus on programming best practices, and a Data Carpentry workshop should focus on data analysis skills.

Third, in the wider SWC community, we are currently in the process of overhauling just about everything. The plan is to split up the bc repo, which will result in a new template for workshop websites and a new template for lesson material. One of the motivations for this effort is that instructors want the flexibility to add domain-specific data and introduce topics in an order that makes the most sense to them.

So instead of upfront trying to democratically choose a compromised solution for R-based workshops, let's try a more distributed approach. Any SWC R instructor can propose a new set of lessons and recruit other interested instructors to help create them. Once the lessons are finished, they can be submitted for official approval to be taught in Software Carpentry and/or Data Carpentry workshops (this approval process is also under development). With this approach, each set of R lessons will be maintained in proportion to the number of instructors interested in teaching them.

If you have an idea for a new set of R lessons, please send your proposal to the r-discuss mailing list. You should include a basic outline of your approach and what you intend to cover. For an example, please check out Scott Ritchie's blog post where he outlines his idea for a set of R lessons. In addition to describing your approach, it would be useful to include the answers to the following questions:

  • How much time will it take to teach the lessons?
  • Are the lessons intended for Software Carpentry, Data Carpentry, or both?
  • How can other instructors help? Do you need others to help create and/or review the lessons?
  • What learners are you targeting? Novice, intermediate, advanced? A specific discipline?

Hopefully this approach will lead to multiple sets of R lessons available for use in our workshops. I look forward to seeing the new proposals!

Mozilla IndiaFirefox OS Bus Day 2: Kochi here we are!

How did it start?

After the awesome bus tour, which was just the beginning, we drove for more than 10 hours from Vellore to Kochi, our next stop for spreading the word and doing much more. In the morning we stopped at a restaurant to freshen up and recharge so we could give our best.
Upon reaching, Abid and the local Mozillians (Binoy and Anush) had the whole plan set up, helping the mobilizers settle in and get right to the campaign. So we headed directly to the Startup Village, AKA the Silicon Valley of Kerala.

The campaign

The Startup Village plays host to a large number of tech- and dev-centric startups. The mobilizer crew divided themselves into groups of three and started talking to the people working on every floor. The great coordination of the regional Mozillians played a big role in the perfect execution of the plans.

It was about 1300 hrs when we finished at the 10k startup building and went for some heavy snacks before traveling to Cochin University of Science and Technology (CUSAT), which again played host to Maker Party Kochi. By the time we reached CUSAT it had already started raining heavily, so we dressed up the mascot and got on with the activities. It turned out to be extremely successful: so many people came to pose with the mascot, and many promising developers asked us how they could help by contributing to the Marketplace. The day ended with giving out swag, and loads of pictures with Foxy were posted with the hashtag #FirefoxOSBus on social media.

The Fun and Promise

After a short while we headed to a CUSAT music festival, where we had a nice evening, all thanks to the volunteers for getting us the passes. Dinner was at Majlish, an Arabian restaurant, and it was damn delicious. That’s how another day came to an end, promising another morning where we would explore, learn and spread the word about Firefox OS, with the firm belief that it will not only blaze the path but also bring a revolution in the lives of the next 2 billion people who are about to come online in the near future. We are now resting for the night in a hotel, from where we will depart for Bangalore early in the morning via Mysore and a few more places.

Mozilla IndiaFirefox OS Bus Day 1: Fox On The Road

As scheduled, we all landed in Hyderabad. Well, to be precise, ‘we’ here refers to the 8 awesome Mozillians from all over India who are part of the mobilizer crew on the Firefox OS bus.

Who were the awesome people?

Although we all had different flight timings, we managed to gather at ‘Collab House‘, Jubilee Hills, Hyderabad. Sumantro and I arrived from Kolkata. We were warmly welcomed by Abid, Mission Commander of the Firefox OS bus. Soon after our arrival, Dipesh from Udaipur, Mrinal from Indore and Akshay from Hyderabad arrived. We were eagerly waiting for Vineel, the head of our crew. After his arrival, Vikas, the logistics lead, joined us, followed by some enthusiastic Mozillians from Hyderabad.

Course of the day

We started off with initial introductions and then carried on with our work. There were loads of tasks left before getting on board. We quickly grabbed our dinner at around 8.30pm. Only after finishing dinner did we get to see the amazing Firefox OS bus. The magnificent view of the bus made all of us really happy and excited. After some quick photo sessions, we finally started our journey.

Leaving for Chennai- first destination

Bidding goodbye to Hyderabad, we left for Chennai, Tamil Nadu. Traveling for almost eight hours, we stopped at Ongole. After getting some refreshments, we continued our journey to Chennai. Our first destination was the VIT Chennai campus, where the energetic, excited and awesome regional coordinators Gauthamraj and Tejdeep were eagerly waiting for us. Although we were running late, the students of VIT gave us a warm welcome, and their participation made us feel at home. The entire session was nicely conducted by all the crew members. The regional coordinators, along with the volunteers (FSAs), deserve a special mention for coordinating the event so well. Along the way, Sai Kiran, another Mozillian, from Warangal, joined us. Last but not least, without the amazing participants the Firefox OS program would not have been successful.

What attracted people?

The special attraction was the fox mascot. People were drawn to the presence of sweet Foxy, and there was a huge queue for a selfie session with the mascot. The best part about the VIT Chennai campus was that soon after we left, we saw posts coming in on #FirefoxOSBus.

After the Chennai campus, we started our journey towards the VIT Vellore campus. It took almost 3 hours for the Firefox OS bus to reach VIT Vellore. We were really late, but happy to see the enthusiastic team waiting for us. With the help of the coordinators, we successfully completed the Firefox OS campaign in Vellore. Thanks to Jaykumar and Kasish for helping us organize it.

Once again props to all the Mozillians in and around Tamil Nadu who made this #FirefoxOSBus a grand success.

We had a lip-smacking dinner at Olive Kitchen, Vellore, and from there we started our journey towards Kochi, which is nearly 800 km away. A long night, but worth it. ;)

…and miles to go before we sleep.
Contributed By
Sreemegha Guha and Akshay Tiwari

Rumbling Edge - Thunderbird2014-10-17 Calendar builds

Common (excluding Website bugs)-specific: (2)

  • Fixed: 1061768 – BuildID in em:updateURL and UI is empty, seems that @GRE_BUILDID@ is not set during build
  • Fixed: 1076859 – fix compiler warnings in libical

Sunbird will no longer be actively developed by the Calendar team.

Windows builds Official Windows

Linux builds Official Linux (i686), Official Linux (x86_64)

Mac builds Official Mac

Rumbling Edge - Thunderbird2014-10-17 Thunderbird comm-central builds

Thunderbird-specific: (21)

  • Fixed: 736002 – The editor for twitter should show inputtable character count
  • Fixed: 1016000 – Remove uses of arguments.callee in /mail (except /mail/test/*)
  • Fixed: 1025316 – Port |Bug 1016132 – fuelApplication.js – mutating the [[Prototype]] of an object will cause your code to run very slowly; instead create the object with the correct initial [[Prototype]] value using Object.create| to Thunderbird for steelApplication.js
  • Fixed: 1036592 – Thunderbird does not respect “Skip Integration”
  • Fixed: 1039963 – TEST-UNEXPECTED-FAIL | test-newmailaccount.js::test_show_tos_privacy_links_for_selected_providers.js
  • Fixed: 1059927 – Extend the inverted icon logic from bug 1046563 to AB, Composer and Lightning
  • Fixed: 1061648 – Mailing list display does not refresh correctly after addresses are deleted
  • Fixed: 1066551 – Add styling for .menulist-menupopup and .menulist-compact removed by bug 1030644
  • Fixed: 1067089 – Port bug 544672 and bug 621873 to Thunderbird – Pin icon on Win8 and don’t propose Quick Launch Bar on Win7+
  • Fixed: 1070614 – Fix some TypeErrors and SyntaxErrors seen in JS strict mode when running mozmill tests
  • Fixed: 1071069 – Thunderbird PFS removal – TEST-UNEXPECTED-FAIL | /builds/slave/test/build/mozmill/content-tabs/test-plugin-unknown.js | test-plugin-unknown.js::test_unknown_plugin_notification_inline | test-plugin-unknown.js::test_unknown_plugin_notification_bar
  • Fixed: 1072652 – Update removed-files for the move from Contents/MacOS to Contents/Resources
  • Fixed: 1073951 – octal literals and octal escape sequences are deprecated: … mozmill/extension/resource/modules/utils.js
  • Fixed: 1073955 – octal literals and octal escape sequences are deprecated: …resource://mozmill/stdlib/httpd.js
  • Fixed: 1074002 – Modify file structure of Thunderbird.app to allow for OSX v2 signing
  • Fixed: 1074006 – Get Thunderbird to launch with the new .app bundle structure
  • Fixed: 1074011 – Thunderbird’s preprocessed channel-prefs.js file needs to be the same for each build
  • Fixed: 1074814 – Fix some strict JS warnings in mail/base/modules
  • Fixed: 1082722 – Remove mozilla-xremote-client from our packages.
  • Fixed: 1083153 – EarlyBird not correctly signed, and doesn’t start up at all
  • Fixed: 1083196 – IM: Lists are broken in Chat

MailNews Core-specific: (11)

  • Fixed: 998189 – Add a basic structured header interface
  • Fixed: 1047883 – Modify test_offlinePlayback.js to use promises.
  • Fixed: 1062235 – Port bug 1062221 (kill add_tier_dir) to comm-central
  • Fixed: 1067116 – compile failes nsEudoraFilters.cpp on case-sensitive HFS+ filesystem
  • Fixed: 1070261 – Improve appearance of Advanced settings of an IMAP account
  • Fixed: 1071497 – error: no matching function for call to ‘NS_NewStreamLoader(nsGetterAddRefs<nsIStreamLoader>, nsCOMPtr<nsIURI>&, nsAbContentHandler* const, nsIInterfaceRequestor*&)’
  • Fixed: 1074034 – Simplify the comm-central build-system post pseudo-rework
  • Fixed: 1074585 – TEST-UNEXPECTED-FAIL | /builds/slave/test/build/tests/xpcshell/tests/mailnews/compose/test/unit/test_detectAttachmentCharset.js | “Shift_JIS” == “UTF-8″ – See following stack:
  • Fixed: 1078524 – TEST-UNEXPECTED-FAIL | /builds/slave/test/build/tests/xpcshell/tests/mailnews/import/test/unit/test_shiftjis_csv.js
  • Fixed: 1080351 – Fix compiler errors caused by bug 1076698
  • Fixed: 1083487 – /usr/bin/m4:./aclocal.m4:7: cannot open `mozilla/build/autoconf/ccache.m4′: No such file or directory

Windows builds Official Windows, Official Windows installer

Linux builds Official Linux (i686), Official Linux (x86_64)

Mac builds Official Mac

Air MozillaWebdev Beer and Tell: October 2014

Webdev Beer and Tell: October 2014 Web developers across the Mozilla community get together (in person and virtually) to share what side projects or cool stuff we've been working on.

WebmakerMozFest 2014 kicks off in one week!


It’s Almost Time!

MozFest — Mozilla’s annual hands-on festival dedicated to forging the future of the open, global web — is about to begin.

This year’s festival, which takes place in London from October 24 – 26, will be packed with passionate technologists and creators eager to share their skills and hack on innovative digital solutions for the web’s most pressing issues.

The Web Is Vulnerable

It’s no secret that the web as a free and open public resource is under threat. Governments and corporations are vying for control, leaving web users across the globe struggling to protect not only their own personal online security, but the integrity of the Internet as a whole. As billions more people come online in the next decade thanks to affordable mobile technologies, is their web going to be open or closed? Decentralized or controlled? Will they be passive consumers or empowered creators? More and more people are realizing we need to step in and save the web, but that’s only going to happen if more of us are fighting.

Together We Are Strong

The good news is that hundreds of thousands of people, organizations and communities around the world are eager to help with this mission. MozFest is about imagining how we can work together. How can citizens of the web in communities around the world be empowered to take action? MozFest participants will tackle these challenges not just by talking about them, but by building new ways to teach and engage everyone in making the web together.

Hacking Practical Solutions

MozFest is where people who love the open web collaborate to envision how it can do more, and do better. The motto of the festival is "Less Yack, More Hack," which results in a focus on identifying current challenges and developing practical solutions. This year, MozFest will feature 11 themed tracks:

  • The Mobile Web
  • Policy & Advocacy
  • Community Building
  • Build and Teach the Web
  • Open Web With Things
  • Source Code for Journalism
  • Science and the Web
  • Art and Culture of the Web
  • Open Badges Lab
  • Hive Learning Networks
  • Musicians and Music Creators on the Open Web

Scores of individual sessions will be held as part of each track. Here’s just a taste of the sessions participants will be hacking:

  • How the next 1 billion internet users will bring their online ideas to life
  • Helping 10 million young people become digitally literate
  • Design your first mobile app
  • Hacking the gender gap
  • Using badges to support the delivery of the new computing curriculum
  • User privacy and security on the web
  • Let’s build an unbreakable internet
  • Making open web a part of the curriculum
  • I was born with the web – 25 under 25
  • How to get into the correct amount of trouble online

Our aim this year is to showcase and develop best practices for community leadership. Join us in discovering how distributed organizing and sharing skills through teaching and learning can build a web filled with opportunity for all!

Get Involved:

 

SUMO BlogWhat’s up with SUMO – 18 October

Hello, SUMO and the www (whole wide world). Here are the latest and greatest updates from SUMO headquarters, situated in cyberspace.

New arrivals to SUMO – we salute you!

Latest SUMO Community meeting video

Our latest meeting focused on SUMO presence in the online world. Watch the video to learn more.

If you want to comment on the video or ask questions regarding the discussion, please do so in the forum thread. Also, please remember that you’re always invited to join our Monday meetings, and we’re very happy when you do.

SUMO Day summary

It took place yesterday, as you may remember… We had a high number of questions and managed to answer 90% of them in 24 hours!

Here’s a list with the great people that made this SUMO Day awesome:

fredmcd-hotmail
jscher2000
cor-el
the-edmeister
philipp
rmcguigan
James
ARAMVA097
Airmail
ouesten
MattAuSupport
ideato
CoryMH
cbaba20
christ1
Gingerbread_Man
hpmini2009
Sayantanmozillian
tanoota
SwamiS
finitarry
jmjimmitchell
sdlokie
rob44
sfhowes
toddy_victor
lillypad
SharonS
eokamura
shakynot
sepharad
codygotkilld
zeek99
kbrosnan
Padme
JohnGB
tex311
tonyc1984
bargaincrusader
lizhenry
SteveMilward
KadirTopal
AOK1
mniemann
Aero312
Bakshara
plantron
Toad-Hall
Muhammad_Faizan
Robing71
hey-bud
aubbieed
juraL
Hackie2
hepatica1
ElectronicTrader
Zenos
Gerry_D
baileyboy
scifihi

Can we make it to 100% next time?

KB updates

The KB dashboard is getting a makeover (thanks to Rehan, Kadir, and Ricky!). You can see the upcoming changes at https://support-dev.allizom.org/en-US/contributors.

If you have feedback about it, please leave it here: https://bugzilla.mozilla.org/show_bug.cgi?id=1068572

Firefox OS goes feature complete for 2.1 and requires localization for 2.0

Now that FxOS 2.1 has reached feature completeness (on the 13th of October), we are kicking off a more focused localization cycle for 2.0. Localizers, please subscribe to this thread; an upcoming update preview is available here. Got questions or comments? Add them to the update thread and we’ll get back to you.

Shout-out time: Bangladesh l10ns!

Just in case you forgot how amazing the SUMO community in Bangladesh is… They kicked off their “Mission 100% SUMO KB” initiative recently, and we’re eagerly awaiting updates from the land of Bengal. Hats off to the Bangladesh l10ns!

And that’s it for the most recent round of updates… See you on Monday and/or on Twitter: we’re there at https://twitter.com/sumo_mozilla.

Software CarpentryNum Wrongs Plus Plus

I was teaching Git to a room of roughly 25 students on day 2 of a Software Carpentry workshop when we ran into a problem that feels like a case study in why it's hard to move science to safer practices.

If you've taught Git before, you know that the first thing you do is have everyone open the shell, navigate to a new directory somewhere (remember how to do that from yesterday?), and type git init. A few hands will shoot up, and red sticky notes start to flower. If you are lucky, you have fewer people in need of help than you have helpers. We weren't so lucky.
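For readers who haven't sat in on a workshop: the opening exercise really is this small, which is exactly why a failure here is so jarring. A minimal sketch of what every learner types (the directory name is just an example, not from the actual lesson):

```shell
# Create a practice directory (yesterday's shell lesson) and
# turn it into an empty Git repository.
mkdir -p workshop-demo
cd workshop-demo
git init
ls -a   # a hidden .git directory now holds the repository data
```

When this works, it takes ten seconds; when it doesn't, the learner is stuck before the lesson has even begun.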

Now what? Do you pause the lesson, let everyone get good and comfortable on Facebook while you jump in to help? Do you keep going and trust that the people whose Git isn't working will be able to catch up? I paused the lesson to help out. I've taught Git five or six times, and I do not remember a single case of a student getting a late start on Git who was still with me by the first coffee break.

I think if we're honest, we accidentally assume that a lot of the problems with installing Git are the learner's fault. Maybe they didn't read the memo about installing it? It happens. Maybe they should have asked for help before I started my lesson? They've only known you could type into the terminal for 24 hours. Maybe we should provide better tools so they know they even have a problem? See previous answer.

Maybe the reason Git fails to install on a few machines every workshop is because software installation is just so incredibly broken.

This is the output that our student saw:

user173-85:~ user-name$ git init
dyld: lazy symbol binding failed: Symbol not found: ___strlcpy_chk
  Referenced from: /usr/local/git/bin/git
  Expected in: /usr/lib/libSystem.B.dylib

dyld: Symbol not found: ___strlcpy_chk
  Referenced from: /usr/local/git/bin/git
  Expected in: /usr/lib/libSystem.B.dylib

Trace/BPT trap: 5

Remember that lofty goal about de-mystifying computers so that smart people can use them to do lofty work? It's in danger.

One of our goals is to teach how to get "unstuck." We search for "git dyld: lazy symbol binding failed: Symbol not found: ___strlcpy_chk". We explain, loud enough for a few more interested neighbors to overhear, that stackoverflow.com is the place to go for this sort of information, and that they should start there.

Stack Overflow tells us to install Xcode. Right! Instructors don't think of things like that because instructors installed it ages ago. We gloss over what Xcode is and why the student needs it while they are navigating to the Mac App Store. We shrug when the person in the next seat didn't need to install Xcode and theirs seems to be working fine.

They find Xcode and start installing. I'm a little nervous about what our iffy wireless will think of a 4.4 GB install, and whether this will tank the GitHub module coming up in an hour.

Except that Xcode, which is in the store, will not install because this version of OS X is too old. (I believe it was version 10.8.) Instructors don't run into this because even those of us with creaky old machines were set up with programming tools ages ago.

Back to the internet, this time with less confidence. At that point, we were looking through the Apple website, explaining why the "probablynotspyware.net" ad on Google isn't a good idea, and growing more and more nervous about the amount of time this was taking. The student actually suggested the final fix: "why don't I just run the installation instructions for Windows? I have it in Parallels." Genius! This install worked as advertised, and the student finished the entire lesson running a Git Bash-enabled Windows cmd shell on her Mac. Fortunately, she was able to keep the file systems straight.

So that's the time we fixed an OS X install issue by using the Windows shell instead. Or, as they say in C++:

while(num_wrongs != right) {
    num_wrongs++;
}

Full disclosure: the author works at Microsoft. Software Carpentry takes an intentionally unopinionated view about the OS that our learners use. Our hope is that they learn to use it better.

Mozilla UXRe-imagine Firefox on tablet

Nowadays the mobile space is dominated by applications that are created for mobile phones. Designers often start by designing an application for phones first and then scale it up for tablets. However, people interact with tablets quite differently due to the unique context of use. For a browser on a tablet, you may find people use it in the kitchen for recipes, on a couch reading or shopping, or at home streaming music and videos. How can Firefox innovate and re-imagine the experience for tablet users?


Design process

Beginning in January 2014, mobile Firefox UX designers started envisioning solutions for an interesting challenge: a Firefox browser that is optimized for tablet-specific use cases and takes full advantage of the tablet form factor.

The team defined two main user experience goals as the first milestone of this project.


To quickly test our design hypotheses for these two goals, I came up with a 10-day sprint model (inspired by Google Ventures' 5-day sprint) for the mobile Firefox UX team. I prototyped a few HTML5 concepts (GIF version) using Hype and published them on usertesting.com to get initial feedback from Android users.

What we learned from the sprint tests:

  1. Desktop controls were familiar to participants and they adopted them quickly
  2. Visual affordance built expectations
  3. Preview of individual tabs was helpful for tab switching
  4. Tab groups met the needs of a small set of tablet users
  5. Onscreen controls required additional time to get familiar with

Based on what we learned from the design sprints [full report], I put together an interaction design proposal for this redesign [full presentation]. To help myself and the rest of the team understand the scope of this redesign, I divided the work into a few parts, from fundamental structure to detailed interactions. My teammate Anthony Lam has been working closely with me, focusing on the visual design of the new UI.

Design Solution

The new Firefox on tablet achieves a good balance between simplicity and power by offering a horizontal tab strip and a full-screen tab panel. Designed for both landscape and portrait use, the new interface takes full advantage of the screen space on tablet to deliver a delightful experience. Here are some of the highlights.

1. Your frequently used actions are one tap away

The new interface features a horizontal tab strip that surfaces your frequent browsing actions, such as switching tabs, opening a new tab, and closing a tab.


2. Big screen invites gestures and advanced features

A full-screen tab panel gives a better visual representation of your normal and private browsing sessions. Taking advantage of the big space, the panel can also be a foundation for more advanced options, such as tab groups and gestural actions for tabs.


3. Make sense of the Web through enhanced search

The new tablet interface will offer a simple and convenient search experience. The enhanced search overlay is powered by search history, search suggestions, your browsing history and bookmarks. You will be able to add search engines of your choice and surface them on the search result overlay.


4. You have control over privacy as always

Private browsing allows you to browse the Internet without saving any information about which sites and pages you’ve visited.


Future concepts

Besides basic tab structure and interactions, I have also experimented with some gestural actions for tabs. You can view some animations of those experiments via this link. I also included a list below with links to Bugzilla. If there is a concept that sounds interesting to you, feel free to post your thoughts and help us make it happen!

  • Add a new tab by long-tapping on the empty space of horizontal tab strip [Bug 1015467]
  • Pin a tab on horizontal tab strip [Bug 1018481]
  • Visual previews for horizontal tabs [Bug 1018493]
  • Blur effect for private tab thumbnails [Bug 1018456]

 

The big picture

Many of the highlighted features above, such as enhanced search and gestural shortcuts, can also be adopted by Firefox for Android on phones. And you may have noticed the new interface was heavily influenced by the simple and beautiful new look of Firefox on desktop.

Given its screen size, the tablet is a perfect platform for bridging consistency between desktop and phone. By focusing on the context of tablet use, Firefox for Android on tablets will establish itself as a standalone product in the Firefox family. We are excited to see a re-imagined tablet experience make Firefox feel more like one product — more Firefoxy — across all our platforms, desktop to tablet to phone.


 

Currently the mobile Firefox team is busy bringing those ideas to life. You can check out our progress by downloading a Firefox Nightly build to your Android tablet and choosing “Enable new tablet UI” in the Settings. And stay tuned for more awesomeness about this project from Anthony Lam, Lucas Rocha, Martyn Haigh, and Michael Comella!

WebmakerQ&A with Maker State

Every year we get the opportunity to connect with many great organizations that are spreading web literacy around the world. MakerState, which runs hands-on makerspaces in New York City, is a perfect example. We had a chance to sit down with MakerState’s founder, Stephen Gilman, to talk about what they’ve done in the past few months and the upcoming events they have planned for continuous making.


What is your organization and what do you do?

MakerState empowers kids ages 5-18 with science, technology, engineering, arts, and math (STEAM) passion and skill through makerspaces in robot engineering, fashion/wearable electronics, video game design, paper circuits, 3D prototyping and printing, comic book creation, and moviemaking. MakerState hosts makerspaces nationwide in schools and after-school programs as well as community workshops, pop up makerspaces, and summer camps.

What are the events you hosted or ran this year?

We hosted over 30 makerspaces this year in schools and community centers in New York, New Orleans, San Francisco, Boston, New Haven…and hopefully coming to your town soon!

Why did you choose to get involved with Maker Party?

We are a community of makers and educators who believe that all learning can happen through building, creating, hacking, inventing…through making. We are committed to bringing as many maker-learning experiences as possible to kids, and Maker Party is a perfect partner for us in that effort. Whether we’re doing pop up makerspaces with Maker Party or ongoing school-based makerspaces throughout the year, we’re excited to be Maker Party hosts.

What is the most exciting thing about running events?

Our favorite moment in the makerspace is when a young person, maybe five, six, seven years old, finds a maker project that they really love and becomes completely immersed in it. They are creating and building and learning science, engineering, design, or programming at the same time. But it’s the total immersion and joy that is so captivating to observe. Psychologist Mihaly Csikszentmihalyi has called that moment the “flow state”—we call it the maker state.

Why is it important for youth and adults to make things with technology?

We see technology as the tools and media humans use to create art, new products, and to interact with others. Tech is how people literally live their lives. Tech can also save lives and bring us joy and allow us to pursue common dreams. There is a darker side to tech too: polluting, disintegrating, even destroying life. We teach kids the power of tech and tool-making so that they understand how to create new technology and benefit from it. Ultimately, it’s about moving young people from passive consumption of tech to become the pro-active, socially responsible creators of it. We’re convinced that this generation of kids we’re working with will create safe forms of energy, life-saving medical treatments, and new forms of media that draw humanity together for peace and productivity. If we can engage kids at a young enough age and build skills, confidence and passions around tech, they will blow our minds with the new world they create.

What is the feedback you usually get from people who attend or teach at your events?

It’s so fun to observe parents as they watch their kids in the makerspace. I like to step back from the kids sometimes and stand beside their parents as they marvel at what their kids are building. The universal reactions: “I can’t believe how much she loves this project.” “I’m so impressed with what my son has built.” “I wish their whole school experience could be like this.” We agree!

Why is it important for people and organizations to get involved with Maker Party and teaching the web?

Maker Party gives kids and communities an opportunity to explore hands-on creativity with technology, often for the first time. This experience is invaluable for young people—often it is life changing. It’s the moment a young girl realizes she can become an engineer and build her world. The moment an inner city student realizes the total joy of science and the rewarding life he can live in pursuit of new ideas and new solutions to human challenges. Maker Party offers these life-changing moments to young people, and we are proud to be a part of the movement.

How can people get in touch with your organization?

To start a STEM-mastery makerspace in your school or host a summer camp, contact MakerState at info@maker-state.com.

Air MozillaAscend Project Final Presentations - Portland Cohort

Ascend Project Final Presentations - Portland Cohort 5-minute lightning talks by new contributors to Mozilla who just completed the first ever Ascend Project.

The Mozilla BlogMozilla and Telefónica Partner to Simplify Voice and Video Calls on the Web

Mozilla is extending its relationship with Telefónica by making it easier than ever to communicate on the Web. Telefónica has been an invaluable partner in helping Mozilla develop and bring Firefox OS to market with 12 devices now available in … Continue reading