Planet Mozilla Interns

June 01, 2016

Brian Krausz

Demystifying Startup Job Offers

I’ve been giving a talk on startup offers for a while, and it’s picked up a bit. I have a recording of it that I gave to a bunch of Waterloo interns 18 months ago, though with dubious video quality. The first half is about the difference between large and small companies, and the second half is about how options work.

Demystifying Startup Job Offers was originally published in Stories by Brian Krausz on Medium, where people are continuing the conversation by highlighting and responding to this story.

June 01, 2016 04:56 PM

March 09, 2016

Michael Sullivan

MicroKanren (μKanren) in Haskell

Our PL reading group read the paper “μKanren: A Minimal Functional Core for Relational Programming” this week. It presents a minimalist logic programming language in Scheme in 39 lines of code. Since none of us are really Schemers, a bunch of us quickly set about porting the code to our personal pet languages. Chris Martens produced this SML version. I hacked up a version in Haskell.

The most interesting part about this was the mistake I made in the initial version. To deal with recursion and potentially infinite search trees, the Scheme version allows some laziness: streams of results can be functions that delay search until forced. When a Scheme μKanren program wants to create a recursive relation, it needs to wrap the recursive call in a dummy function (and plumb through the input state); the Scheme version wraps this in a macro called Zzz to make doing it more palatable. I originally thought that all of this could be dispensed with in Haskell: since Haskell is lazy, no special work needs to be done to prevent self reference from causing an infinite loop. The delay served an important secondary purpose, though: providing a way to detect recursion so that we can switch which branch of the tree we are exploring. Without it, although the fives test below works, the fivesRev test loops forever without producing anything.
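To make the role of the delay concrete, here is a rough sketch of the relevant μKanren pieces transliterated into Python (my own illustration, not the code from the Gist; conj/bind are omitted and all names are invented). Streams are either mature pairs or zero-argument thunks, and mplus swaps its arguments whenever it meets a thunk — exactly the branch-switching that makes fivesRev productive:

```python
def var(c): return ("var", c)
def is_var(t): return isinstance(t, tuple) and len(t) == 2 and t[0] == "var"

def walk(u, s):
    # Follow substitution chains until we hit a non-variable or a fresh var.
    while is_var(u) and u in s:
        u = s[u]
    return u

def unify(u, v, s):
    u, v = walk(u, s), walk(v, s)
    if u == v: return s
    if is_var(u): return {**s, u: v}
    if is_var(v): return {**s, v: u}
    return None

def eq(u, v):
    def goal(sc):
        s, c = sc
        s2 = unify(u, v, s)
        # A stream is None (empty), a (head, tail) pair, or a thunk.
        return ((s2, c), None) if s2 is not None else None
    return goal

def call_fresh(f):
    return lambda sc: f(var(sc[1]))((sc[0], sc[1] + 1))

def mplus(s1, s2):
    if s1 is None: return s2
    if callable(s1):
        return lambda: mplus(s2, s1())  # the swap: switch branches on a delay
    head, tail = s1
    return (head, mplus(tail, s2))

def disj(g1, g2): return lambda sc: mplus(g1(sc), g2(sc))

def zzz(thunk):
    # The Scheme version's Zzz: wrap a recursive goal in a dummy function.
    return lambda sc: (lambda: thunk()(sc))

def take(n, stream):
    out = []
    while n and stream is not None:
        if callable(stream):
            stream = stream()          # force a delayed stream
        else:
            head, stream = stream
            out.append(head); n -= 1
    return out

def fives(x): return disj(eq(x, 5), zzz(lambda: fives(x)))
def fives_rev(x): return disj(zzz(lambda: fives_rev(x)), eq(x, 5))

empty = ({}, 0)
assert len(take(2, call_fresh(fives)(empty))) == 2
assert len(take(2, call_fresh(fives_rev)(empty))) == 2  # needs the swap
```

Without the `callable(s1)` case in mplus, fives_rev would descend forever into its left branch before ever trying eq.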

The initial version was also more generalized. The type signatures allowed for operating over any MonadPlus, thus allowing pluggable search strategies. KList was just a newtype wrapper around lists. When I had to add delay I could have defined a new MonadPlusDelay typeclass and parametrized over that, but it didn’t seem worthwhile.

A mildly golfed version that drops blank lines, type annotations, comments, aliases, and test code clocks in at 33 lines.

View the code on Gist.

March 09, 2016 12:10 AM

November 05, 2015

Michael Sullivan

End of summer Rust internship slides from 2011-2013

Slides from my end of summer talks for all of my Rust internships. (This post heavily backdated and the slides are hilariously out of date.)

2011 – Closures for Rust

2012 – Vector Reform and Static Typeclass Methods

2013 – Default Methods in Rust


November 05, 2015 10:58 PM

July 05, 2015

Jonathan Wilde

Gossamer Sprint Two, Day Three

Want more context? See the introduction to Gossamer and previous update.

Another Demo

Let’s say you make a code change to your browser and you want it today. After making your change, you need to restart the app or, in the case of browser.html, clear caches and refresh the page.

With our experimental fork of browser.html, we can now apply a lot of different types of changes without a refresh.

Let’s say we want to change the experiments icon in the upper right of our browser and make it red and larger. We just make the change and hit save. The changes appear in the running browser, without any loss of state.

We’re doing this with Webpack Hot Module Replacement and React Hot Loader.

In the demo, I’m running browser.html from Webpack’s development server. It watches and serves the browser.html files from my working copy, performs incremental module builds, and has an open connection to the browser notifying it of build status.

When the working copy changes, it performs an incremental build and notifies the browser of new code. The browser can apply the changes without a restart.

What I Did on Saturday

Next Steps

July 05, 2015 12:00 AM

July 04, 2015

Jonathan Wilde

Gossamer Sprint Two, Days One and Two

I’m currently working with Lyre Calliope on a project to improve tooling for developing and sharing web browser features.

I’ll be documenting my progress on this Mediapublic-style.

First, a Little Demo

In order to tinker with your web browser’s source today, you need to download a working copy of the source, set up a build environment, and have your text editor selected and configured. It can take hours, even for people who’ve done it before.

Why can’t we just edit and share web browser UI changes from a web application, like we can with documents and other things?

In our experimental fork of browser.html, we can open the GitHub web interface (even from the browser you’re trying to edit), edit the color, and, when the update popup appears in the web browser, click “Apply”.

We don’t have to configure Gossamer to continuously build and ship our branches, and other people testing the same Gossamer branch receive that update, too.

In case you’re curious, here’s the commit I made in the demo.

What I Did Thursday and Friday


Stay tuned for more!

July 04, 2015 12:00 AM

July 02, 2015

Jonathan Wilde

A Project Called Gossamer

A few summers back, I worked on the Firefox Metro project. The big challenge I ran into the first summer–back when the project was at a very early stage–was figuring out how to distribute early builds.

I wanted to quickly test work-in-progress builds across different devices on my desk without having to maintain a working copy and rebuild on every device. Later on, I also wanted to quickly distribute builds to other folks, too.

I had a short-term hack based on Dropbox, batch scripts, and hope. It was successful at getting rapid builds out, but janky and unscalable.

The underlying problem space–how do you build, distribute, and test experimental prototypes rapidly?–is one that I’ve been wanting to revisit for a while.

So, Gossamer

This summer, Lyre Calliope and I have had some spare time to tinker on this for fun.

We call this project Gossamer, in honor of the Gossamer Albatross, a success story in applying rapid prototyping methodology to building a human-powered airplane.

We’re working to enable the following development cycle:

  1. Build a prototype in a few hours or maybe a couple days, and at the maximum fidelity possible–featuring real user data, instead of placeholders.
  2. Share the prototype with testers as easily as sharing a web page.
  3. Understand how the prototype is performing in user testing relative to the status quo, qualitatively and quantitatively.
  4. Polish and move ideas that work into the real world in days or possibly weeks, instead of months or years.

A First Proof-of-Concept

We started by working to build a simple end-to-end demonstration of a lightweight prototyping workflow:

(Yeah, it took longer than two weeks due to personal emergencies on my end.)

We tinkered around with a few different ways to do this.

Our proof-of-concept is a simple distribution service that wraps Mozilla’s browser.html project. It’s a little bit like TestFlight or HockeyApp, but for web browsers.

To try an experimental build, you log in via GitHub, and pick the build that you want to test…and presto!

Sequence shortened.

About the login step: When you pick an experiment, you’re picking it for all of your devices logged in via that account.

This makes cross-device feature testing a bit easier. Suppose you have a feature you want to test on different form factors because the feature is responsive to screen dimensions or input methods. Or suppose you’re building a task continuity feature that you need to test on multiple devices. Having the same experiment running on all the devices of your account makes this testing much easier.

It also enables us to have a remote one-click escape hatch in case something breaks in the experiment you’re running. (It happens to the best developers!)

To ensure that you can trust experiments on Gossamer, we integrated the login system with Mozillians. Only vouched Mozillians can ship experimental code via Gossamer.

To ship an experimental build…you click the “Ship” button. Boom. The user gets a message asking them if they want to apply the update.

And the cool thing about browser.html being a web application…is that when the user clicks the “Apply” button to accept the update…all we have to do is refresh the window.

We did some lightweight user testing by having Lyre (who hadn’t seen any of the implementation yet) step through the full install process and receive a new updated build from me remotely.

We learned a few things from this.

What We’re Working On Next

There are three big points we want to focus on in the next milestone:

  1. Streamline every step. The build service web app should fade away and just be hidden glue around a web browser- and GitHub-centric workflow.
  2. Remove the refresh during updates. Tooling for preserving application state while making hot code changes to web applications based on React (such as browser.html!) is widely available.
  3. Make the build pipeline as fast as possible. Let’s see how short we can make the delay from pushing new code to GitHub (or editing through GitHub’s web interface) to updates appearing on your machine.

We also want to shift our mode of demo from screencasts to working prototypes.

Get Involved

This project is still at a very early stage, but if you’d like to browse the code, it’s in three GitHub repositories:

Most importantly, we’d love your feedback:

There’s a lot of awesome in the pipeline. Stay tuned!

July 02, 2015 12:00 AM

February 25, 2015

Michael Sullivan

Parallelizing compiles without parallelizing linking – using make

I have to build LLVM and Clang a lot for my research. Clang/LLVM is quite large and takes a long time to build if I don’t use -j8 or so to parallelize the build; but I also quickly discovered that parallelizing the build didn’t work either! I work on a laptop with 8 GB of RAM, and while this can easily handle 8 parallel compiles, 8 parallel links plus Firefox and Emacs and everything else is a one-way ticket to swap town.

So I set about finding a way to parallelize the compiles but not the links. Here I am focusing on building an existing project; there are probably nicer ways that a Makefile author could make this easier (or even the default), but I haven’t really thought about that.

My first attempt was the hacky (while ! pgrep ld.bfd.real; do sleep 1; done; killall make ld.bfd.real) & make -j8; sleep 2; make. Here we wait until a linker has run, kill make, then rerun make without parallel execution. I expanded this into a more general script:

View the code on Gist.

This approach is kind of terrible. It’s really hacky, it has a concurrency bug (that I would fix if the whole thing wasn’t already so bad), and it slows things down way more than necessary; as soon as one link has started, nothing more is done in parallel.

A better approach is to use locking to make sure only one link command can run at a time. There is a handy command, flock, that does just that: it uses a lock file to serialize execution of a command. We can just replace the Makefile’s linker command with a command that calls flock and everything will sort itself out. Unfortunately there is no totally standard way for Makefiles to represent how they do linking, so some Makefile source diving becomes necessary. (Many use $(LD); LLVM does not.) With LLVM, the following works: make -j8 'Link=flock /tmp/llvm-build $(Compile.Wrapper) $(CXX) $(CXXFLAGS) $(LD.Flags) $(LDFLAGS) $(TargetCommonOpts) $(Strip)'

That’s kind of nasty, and we can do a bit better. Many projects use $(CC) and/or $(CXX) as their underlying linking command; if we override that with something that uses flock then we’ll wind up serializing compiles as well as links. My hacky solution was to write a wrapper script that scans its arguments for “-c”; if it finds a “-c” it assumes it is a compile, otherwise it assumes it is a link and uses locking. We can then build LLVM with: make -j8 'CXX=lock-linking /tmp/llvm-build-lock clang++'.
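The actual wrapper script is on the Gist; the idea can be sketched in a few lines of Python (my own illustration — the name lock-linking.py and the details are assumptions), using the stdlib fcntl module instead of the flock binary:

```python
#!/usr/bin/env python3
"""lock-linking.py LOCKFILE CMD...: serialize links, run compiles freely."""
import fcntl
import subprocess
import sys

def run(lockfile, cmd):
    if "-c" in cmd:
        # Looks like a compile: run it immediately, fully in parallel.
        return subprocess.call(cmd)
    # Looks like a link: hold an exclusive lock (released when the file
    # is closed) so only one link command runs at a time.
    with open(lockfile, "w") as f:
        fcntl.flock(f, fcntl.LOCK_EX)
        return subprocess.call(cmd)

if __name__ == "__main__":
    sys.exit(run(sys.argv[1], sys.argv[2:]))
```

It would be invoked just as in the text, e.g. make -j8 'CXX=lock-linking.py /tmp/llvm-build-lock clang++'.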

View the code on Gist.

Is there a better way to do this sort of thing?

February 25, 2015 08:21 PM

Forcing memory barriers on other CPUs with mprotect(2)

I have something of an unfortunate fondness for indefensible hacks.

As I discussed in my last post, RCU is a synchronization mechanism that excels at protecting read-mostly data. It is a particularly useful technique in operating system kernels because full control of the scheduler permits many fairly simple and very efficient implementations of RCU.

In userspace, the situation is trickier, but still manageable. Mathieu Desnoyers and Paul E. McKenney have built a Userspace RCU library that contains a number of different implementations of userspace RCU. For reasons I won’t get into, efficient read-side performance in userspace seems to depend on having a way for a writer to force all of the reader threads to issue a memory barrier. The URCU library has one version that does this using standard primitives: it sends signals to all other threads; in their signal handlers the other threads issue barriers and indicate so; the caller waits until every thread has done so. This is very heavyweight and inefficient because it requires running all of the threads in the process, even those that aren’t currently executing! Any thread that isn’t scheduled now has no reason to execute a barrier: it will execute one as part of getting rescheduled. Mathieu Desnoyers attempted to address this by adding a membarrier() system call to Linux that would force barriers in all other running threads in the process; after more than a dozen posted patches to LKML and a lot of back and forth, it got silently dropped.

While pondering this dilemma I thought of another way to force other threads to issue a barrier: by modifying the page table in a way that would force an invalidation of the Translation Lookaside Buffer (TLB) that caches page table entries! This can be done pretty easily with mprotect or munmap.
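As a toy illustration of the mechanism (not the actual patch, which lives in the kernel/URCU world), the protection-flipping trick can be poked at from userspace with Python’s ctypes: changing a page’s permissions rewrites its page table entry, which obliges the kernel to invalidate any stale TLB entries for that mapping on other CPUs.

```python
import ctypes
import ctypes.util
import mmap

libc = ctypes.CDLL(ctypes.util.find_library("c"), use_errno=True)
libc.mprotect.argtypes = [ctypes.c_void_p, ctypes.c_size_t, ctypes.c_int]
PROT_READ, PROT_WRITE = 0x1, 0x2

page = mmap.mmap(-1, mmap.PAGESIZE)  # anonymous, page-aligned mapping
addr = ctypes.addressof(ctypes.c_char.from_buffer(page))

# Dropping and then restoring write permission changes the page table
# entry each time; on a multiprocessor this forces a TLB shootdown on
# every CPU that has the mapping cached, which is where the implied
# barrier comes from.
assert libc.mprotect(addr, mmap.PAGESIZE, PROT_READ) == 0
assert libc.mprotect(addr, mmap.PAGESIZE, PROT_READ | PROT_WRITE) == 0
```

This only shows the calls involved; whether the shootdown is a usable substitute for membarrier() is exactly the question the patch explores.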

Full details in the patch commit message.

February 25, 2015 04:05 AM

Why We Fight

Why We Fight, or

Why Your Language Needs A (Good) Memory Model, or

The Tragedy Of memory_order_consume’s Unimplementability

This, one of the most terrifying technical documents I’ve ever read, is why we fight:


For background, RCU is a mechanism used heavily in the Linux kernel for locking around read-mostly data structures; that is, data structures that are read frequently but fairly infrequently modified. It is a scheme that allows for blazingly fast read-side critical sections (no atomic operations, no memory barriers, not even any writing to cache lines that other CPUs may write to) at the expense of write-side critical sections being quite expensive.

The catch is that writers might be modifying the data structure as readers access it: writers are allowed to modify the data structure (often a linked list) as long as they do not free any memory removed until it is “safe”. Since writers can be modifying data structures as readers are reading from it, without any synchronization between them, we are now in danger of running afoul of memory reordering. In particular, if a writer initializes some structure (say, a routing table entry) and adds it to an RCU protected linked list, it is important that any reader that sees that the entry has been added to the list also sees the writes that initialized the entry! While this will always be the case on the well-behaved x86 processor, architectures like ARM and POWER don’t provide this guarantee.

The simple solution to make the memory order work out is to add barriers on both sides on platforms where it is needed: after initializing the object but before adding it to the list and after reading a pointer from the list but before accessing its members (including the next pointer). This cost is totally acceptable on the write-side, but is probably more than we are willing to pay on the read-side. Fortunately, we have an out: essentially all architectures (except for the notoriously poorly behaved Alpha) will not reorder instructions that have a data dependency between them. This means that we can get away with only issuing a barrier on the write-side and taking advantage of the data dependency on the read-side (between loading a pointer to an entry and reading fields out of that entry). In Linux this is implemented with macros “rcu_assign_pointer” (that issues a barrier if necessary, and then writes the pointer) on the write-side and “rcu_dereference” (that reads the value and then issues a barrier on Alpha) on the read-side.

There is a catch, though: the compiler. There is no guarantee that something that looks like a data dependency in your C source code will be compiled as a data dependency. The most obvious way to me that this could happen is by optimizing “r[i ^ i]” or the like into “r[0]”, but there are many other ways, some quite subtle. This document, linked above, is the Linux kernel team’s effort to list all of the ways a compiler might screw you when you are using rcu_dereference, so that you can avoid them.

This is no way to run a railway.

Language Memory Models

Programming by attempting to quantify over all possible optimizations a compiler might perform and avoiding them is a dangerous way to live. It’s easy to mess up, hard to educate people about, and fragile: compiler writers are feverishly working to invent new optimizations that will violate the blithe assumptions of kernel writers! The solution to this sort of problem is that the language needs to provide the set of concurrency primitives that are used as building blocks (so that the compiler can constrain its code transformations as needed) and a memory model describing how they work and how they interact with regular memory accesses (so that programmers can reason about their code). Hans Boehm makes this argument in the well-known paper Threads Cannot be Implemented as a Library.

One of the big new features of C++11 and C11 is a memory model which attempts to make precise what values can be read by threads in concurrent programs and to provide useful tools to programmers at various levels of abstraction and simplicity. It is complicated, and has a lot of moving parts, but overall it is definitely a step forward.

One place it falls short, however, is in its handling of “rcu_dereference” style code, as described above. One of the possible memory orders in C11 is “memory_order_consume”, which establishes an ordering relationship with all operations after it that are data dependent on it. There are two problems here: first, these operations deeply complicate the semantics; the C11 memory model relies heavily on a relation called “happens before” to determine what writes are visible to reads; with consume, this relation is no longer transitive. Yuck! Second, it seems to be nearly unimplementable; tracking down all the dependencies and maintaining them is difficult, and no compiler yet does it; clang and gcc both just emit barriers. So now we have a nasty semantics for our memory model and we’re still stuck trying to reason about all possible optimizations. (There is work being done to try to repair this situation; we will see how it turns out.)

Shameless Plug

My advisor, Karl Crary, and I are working on designing an alternate memory model (called RMC) for C and C++ based on explicitly specifying the execution and visibility constraints that you depend on. We have a paper on it and I gave a talk about it at POPL this year. The paper is mostly about the theory, but the talk tried to be more practical, and I’ll be posting more about RMC shortly. RMC is quite flexible: all of the C++11 model apart from consume can be implemented in terms of RMC (although that’s probably not the best way to use it), and consume-style operations are done in a more explicit and more implementable (and implemented!) way.

February 25, 2015 04:05 AM

The x86 Memory Model

Often I’ve found myself wanting to point someone to a description of the x86’s memory model, but there wasn’t any that quite laid it out the way I wanted. So this is my take on how shared memory works on multiprocessor x86 systems. The guts of this description is adapted/copied from “A Better x86 Memory Model: x86-TSO” by Scott Owens, Susmit Sarkar, and Peter Sewell; this presentation strips away most of the math and presents it in a more operational style. Any mistakes are almost certainly mine and not theirs.

Components of the System:

There is a memory subsystem that supports the following operations: store, load, fence, lock, unlock. The memory subsystem contains the following:

  1. Memory: A map from addresses to values
  2. Write buffers: Per-processor lists of (address, value) pairs; these are pending writes, waiting to be sent to memory
  3. “The Lock”: Which processor holds the lock, or None, if it is not held. Roughly speaking, while the lock is held, only the processor that holds it can perform memory operations.

There is a set of processors that execute instructions in program order, dispatching commands to the memory subsystem when they need to do memory operations. Atomic instructions are implemented by taking “the lock”, doing whatever reads and writes are necessary, and then dropping “the lock”. We abstract away from this.


A processor is “not blocked” if either the lock is unheld or it holds the lock.

Memory System Operation

Processors issue commands to the memory subsystem. The subsystem loops, processing commands; each iteration it can pick the command issued by any of the processors to execute. (Each will only have one.) Some of the commands issued by processors may not be eligible to execute because their preconditions do not hold.

  1. If a processor p wants to read from address a and p is not blocked:
    a. If there are no pending writes to a in p’s write buffer, return the value from memory
    b. If there is a pending write to a in p’s write buffer, return the most recent value in the write buffer
  2. If a processor p wants to write value v to address a, add (a, v) to the back of p’s write buffer
  3. At any time, if a processor p is not blocked, the memory subsystem can remove the oldest entry (a, v) from p’s write buffer and update memory so that a maps to v
  4. If a processor p wants to issue a barrier
    a. If the barrier is an MFENCE, p’s write buffer must be empty
    b. If the barrier is an LFENCE/SFENCE, there are no preconditions; these are no-ops **
  5. If a processor p wants to lock the lock, the lock must not be held and p’s write buffer must be empty; the lock is set to be p
  6. If a processor p wants to unlock the lock, the lock must be held by p and p’s write buffer must be empty; the lock is set to be None


So, the only funny business that can happen is that a load can happen before a prior store to a different location has been flushed from the write buffer into memory. This means that if CPU0 executes “x = 1; r0 = y” and CPU1 executes “y = 1; r1 = x”, with x and y both initially zero, we can get “r0 == r1 == 0”.
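The rules above are simple enough to turn into a toy executable model (a sketch of my own, not code from the paper) that replays this store-buffering litmus test:

```python
class TSO:
    def __init__(self):
        self.mem = {"x": 0, "y": 0}
        self.wb = {0: [], 1: []}          # per-processor FIFO write buffers

    def store(self, cpu, addr, val):
        self.wb[cpu].append((addr, val))  # rule 2: writes go to the buffer

    def load(self, cpu, addr):
        # rule 1b: forward the most recent buffered write to this address
        for a, v in reversed(self.wb[cpu]):
            if a == addr:
                return v
        return self.mem[addr]             # rule 1a: otherwise read memory

    def flush_one(self, cpu):
        a, v = self.wb[cpu].pop(0)        # rule 3: drain the oldest entry
        self.mem[a] = v

m = TSO()
m.store(0, "x", 1); m.store(1, "y", 1)   # both stores sit in the buffers
r0 = m.load(0, "y"); r1 = m.load(1, "x") # loads run before either flush
assert (r0, r1) == (0, 0)                # the "funny business" outcome
m.flush_one(0); m.flush_one(1)
assert m.mem == {"x": 1, "y": 1}         # memory eventually catches up
```

An MFENCE between the store and the load would forbid this schedule, since it cannot execute until the write buffer is empty.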

The common intuition that atomic instructions act like there is an MFENCE before and after them is basically right; MFENCE requires the write buffer to empty before it can execute and so do lock and unlock.

x86 is a pleasure to compile atomics code for. The “release” and “acquire” operations in the C++11 memory model don’t require any fencing to work. Neither do the notions of “execution order” and “visibility order” in my advisor’s and my RMC memory model.

** The story about LFENCE/SFENCE is a little complicated. Some sources insist that they actually do things. The Cambridge model models them as no-ops. The guarantees that they are documented to provide are just true all the time, though. I think they are useful when using non-temporal memory accesses (which I’ve never done), but not in general.


February 25, 2015 04:04 AM

April 11, 2014

Tiziana Sellitto

Outreach Program For Women a year later

A year has passed and a new Summer will begin…a new summer for the women who will be chosen and who will soon start GNOME’s Outreach Program for Women.

This summer Mozilla will participate with three different projects listed here, and among them is the Mozilla Bug Wrangler for Desktop QA, which is the one I applied for last year. It has been a great experience for me and I want to wish good luck to everyone who submitted an application.

I hope you’ll have a wonderful and productive summer 🙂

April 11, 2014 02:14 PM

March 11, 2014

Michael Sullivan

Inline threading for TraceMonkey slides

Here are the slides from my end of summer brown-bag presentation:

March 11, 2014 04:26 PM

September 20, 2013

Tiziana Sellitto

OPW at the end…

After three months, my internship period at Mozilla for the OPW has come to an end. I now have a clearer idea in my mind of what Mozilla and open source are. I’ve experienced this movement of people that is really passionate about what it does; a new way of working, an open and smart world. I’m really thankful for this opportunity.
This internship has meant a lot to me from both a professional and a personal point of view. I’ve had a great chance to improve my professional skills working with a large open source project and I’ve learnt (or tried to 🙂):

I’ve started working with the Bugmasters community, talking, helping triagers, developers and community members, and I want to thank my Mozilla mentor Liz Henry and everyone who has made this possible for the help, the support and the work done.
During this period:

it’s the end of the summer but it’s not an end at all! 🙂 In the future I’ll keep on helping, devoting my efforts and my time to the Mozilla community and living open source every day! 🙂
See you all around the web!

September 20, 2013 11:50 PM

September 03, 2013

Tiziana Sellitto

Tips and Tricks

Attaching “&debug=1” to a query like this Untriaged Firefox bugs shows the SQL version of the query (sql version) along with its execution time. It is a useful way to better understand the query, especially if you are trying to check whether it reflects what you intend to do.

September 03, 2013 09:21 PM

August 28, 2013

Tiziana Sellitto

One month to go…

One month and my OPW experience will finish. I was reading a blog post of Gabriela Salvador Thumé, an OPW intern involved in a Mozilla Crash Stats project named Socorro.

This week closed a cycle of 2 months in this amazing OPW experience. I am so glad to be part of an incredible team like Socorro. I was reflecting about our hopes and expectations, sometimes we feel that we don’t have to dream huge because the risk of the dream come true is little, but if we always dream at the lower limit, we never are going to experience the happiness of doing something that really challenges you. I know that doing challenging things all the time can be frustrating, but the gratification is so much higher than the fear of do not getting whatever you want.
I am digressing into this because I have just one month till the end of the OPW and I am enjoying so much that I don’t want it to end. But I am sure that because of this awesome experience I am rethinking a lot of thoughts that I have about myself, like my capability of doing what I really want (maybe sometimes I feel a little about getting into the impostor syndrome).

She was reflecting on this internship, and I’ve started thinking about these two months that have passed: about all the expectations, feelings, experiences…
It seems to me that I have lived two months of challenges, professional and personal. It’s like when you go to a game shop and you see a shelf full of puzzles. You decide to buy a puzzle because you like it, you feel that you can finish it and that it will fit well in your home. Then, each day you look at the puzzle and you add a new piece.
In the same way, each day of this internship has been a new challenge, a new piece to add to complete the puzzle. Sometimes you find the pieces that perfectly match the others and you feel like you are doing well, but there are days when you discover that a piece is not in the right place and you need to remove it and start working on it again.

This internship is spurring me to keep on working, to measure myself against others, to let my fears go, not to worry too much about mistakes that can always happen, and to work hard to reach my goals and improve myself. You could sometimes feel like you have chosen a puzzle too great for your abilities, but maybe you only need more time, or more study, or more effort, and I’m sure it will be ok…you’ll finish the puzzle!

Every new experience is a challenge: you’ll never feel like everything is clear at the beginning, but the gratification at the end will be the reward!


August 28, 2013 07:42 PM

August 19, 2013

Mike Hordecki

Add-on debugging in Firefox

A sneak peek at the upcoming add-on debugger in Firefox.

August 19, 2013 11:52 PM

August 09, 2013

Tiziana Sellitto

[good first bug] FIXED

One of the projects I’m working on during my OPW internship is triaging mentored and good first bugs. [good first bug] is the tag used in the whiteboard to signal that a bug could be a good first step for newcomers to Mozilla to work on. A [mentor=x] tag is also added in the whiteboard if that bug has a mentor assigned to it who can help developers working on it.

Triaging good first bugs and mentored bugs means running through them and making sure that they are current and still valid (most of those bugs are a little stale but valid; others are not valid anymore). Another goal is unassigning inactive bugs and leaving them free for a new person to work on. One of the bug days organized by Bugmasters is that of Mentored Bugs, but even if there is no active Bug Day it is possible to work on this at any moment. During this activity I noticed bug 854952 and decided to try to work on it :).

After the bug has been assigned to me

My patch has been approved and added to the mozilla-central repository; it will be available in Firefox 25, though it is already visible in Nightly 26.0a1 (2013-08-06). It involves changes in the fullscreen permission prompt, and the changes required CSS and XUL knowledge, as well as a minimal understanding of Mercurial and JavaScript. I’ve worked on Mac OS, but during the development process I’ve tested it on Windows, too, so I’ve learned how to build Firefox on another platform, too (and with my Windows machine with only 4 GB of RAM it’s not so quick 😉 )
This is the result of my work 🙂

Fullscreen Permission Prompt

Current Fullscreen Permission Prompt

While this was the previous prompt.

Previous Fullscreen Permission Prompt

I’ve changed the text and the buttons’ order on UNIX systems, added a background image and halved the border-radius to make the prompt prettier 🙂 . I need to thank my mentor Jared Wein and the whole developers’ community, which can be reached on the #introduction channel of, because my good first bug has been RESOLVED FIXED in a short period of time and now I’ve learned a little bit more 🙂

August 09, 2013 04:55 PM

August 07, 2013

Dhaval Giani

Transparent Decompression

If you have used Firefox on Android, you will be quite impressed to know that it loads compressed code into memory on demand. In order to do this, however, a custom linker was required, which would handle a SIGSEGV, retrieve the compressed data from disk, and then decompress it into the area reserved for it.

For the past few days, I have been working on getting the filesystem to do that for me transparently. What this means is that now we just take the SIGSEGV and load uncompressed data directly. No messing around in userspace, and no complex error handling. I am almost ready to post v1 of the patchset.

The first step was to pick a compression scheme that lends itself easily to seeking and the other good properties we all like. Since one of our primary targets is Firefox, we decided to use the same compression scheme it does. However, the implementation must not be tied to a single compression type.

The seekable zip format is just zlib-compressed data chunked into 16 KB chunks, each compressed individually (using zlib), plus a custom header which stores all of this information. There is a tool available to generate these here; just build and use the szip tool to make compressed files.

The big challenge was how to modify the filesystem to handle these cases. For v1, complexity is evil: I have modified only the read path, where we do nothing different with an uncompressed file (except add another level of indirection). The fun starts once we know a file is compressed. First we check whether we already have the header information; if not, we retrieve it from disk and initialize it. Once it is set up, we correct the offsets to point to the right chunk, retrieve the data from disk, uncompress it, and copy it into the userspace buffers. Userspace is not aware that anything happened underneath.

The implementation is still tied to the szip format in this version, but making it compression-scheme agnostic is just a question of code refactoring and interface design.

The bigger issue, IMO, is that we are too high in the stack. Ideally one would want to decompress soon after getting the data off the disk. One possible way is to map compressed and uncompressed data to different pages and expose only the uncompressed data. I will tackle these issues in the next version.

August 07, 2013 05:46 PM

July 21, 2013

Tiziana Sellitto

Weekly Update.

Almost a month has passed since the beginning of my OPW internship, and this week we had our first OPW meeting on the GNOME #opw channel on IRC. We had the chance to introduce ourselves to the community and to the other women, and each of us reported the efforts, the goals and the work done until now :). It was a useful way to connect with each other and write about our experiences. I really want to thank all the moderators who are making this possible 🙂

Now coming back to my weekly work…

QA Activity

This week I’ve worked on bug triaging, and I took part in

Here is a report of my activities, listing the triage work done during the last week.

Developer Activity

This week I’ve implemented some queries in my Bugmasters Extension, based on the projects listed in the Bugmasters Project Page. I’ve added some queries with the number of bugs for each one. Below are some screenshots of the addon as built so far. The select drop-down menu contains other useful options to search for specific bugs based on the user’s email.

Select options

Bugmaster Addon

July 21, 2013 07:27 PM

July 19, 2013


Day 1 at Mozilla

Hi Everyone,

It has already been 6 weeks since I started my internship at Mozilla. I thought I should start writing my blog now; better late than never.

I started this internship on June 3rd, 2013. I woke up early and reached the Mountain View office around 8:00 AM. There were many interns joining on the same day. The HR people gave us some information and we completed paperwork. Andrea gave us a bag with the Mozilla logo, headphones and socks too. Then she drove us to 10forward, where most of the Mozillians were attending the “All-Hands Meeting”. All interns were waiting for their mentors. After a few minutes, my mentor Anthony Hughes came and pronounced my name. Though his pronunciation was wrong, I understood it. It didn’t surprise me because I had expected it: my name is too long and not easy to pronounce unless someone is originally from India.

Finally, after all the formalities, I was with Anthony. We attended the rest of the meeting and came to the place where the automation team sits. Anthony didn’t know about my desk, so he talked to Marc (my manager) and made a temporary arrangement which became permanent. The other interns already had desks assigned, and after some days I found that one desk had already been assigned to me, but I am happy with my current desk.

Anthony gave me information about the projects and my tasks, and also provided online documentation. I was so nervous in the morning, worrying about so many things, but after spending a few hours with Anthony I became calm and confident.

On a final note, I look forward to working with Anthony and my entire team. I am really very excited. Stay tuned for updates next day/week.

July 19, 2013 05:25 AM

July 07, 2013

Tiziana Sellitto

Weekly Update

My OPW internship with Mozilla is going well. I’ve started weekly meetings with my mentor, and I hope I’ll improve my English communication skills as well. 😀

QA Activity

This week I’ve worked on bug triaging, and I took part in the

I’ve also decided to learn more about Bugzilla and the bug lifecycle. I think the first important things to understand when approaching Bugzilla and triaging, which are also the goals of my work as a bug wrangler, are:

  1. the tool you are using, and how you can help others learn it better and more quickly. I’ve started learning more about Bugzilla: reading docs and wikis, and asking for info on IRC :). At my mentor’s suggestion I’ve begun by sketching some ideas for the One Bugzilla per Child project, a project to set up training missions for new users. I’ve started exploring technologies and contents for some training missions about basic Bugzilla use and bug triage.
  2. how you can improve the triaging work and help others do the same. I’ve started exploring Bugzilla query searches and reporter and developer activity; I’ve looked at a lot of bugs and read comments, whiteboard flags, reporting dates, modification dates, and the time within which reporters answer and add more info to their bugs. I’ve found that there is a range of bugs, Middle-aged Bugs, that haven’t been changed recently but were reported within the last 6 months. Triaging them may find good, relevant bugs, and it can also help clear out invalid ones! During these weeks I’ve focused my attention on these bugs, starting from an initial set of around 405 bugs, and I’ve started triaging them. I’ve also worked on generalizing some simple steps to follow for triaging these bugs to help others approach this Bugmasters project, and the result is here. After two weeks the number of bugs has decreased to 380 (some of them are now resolved as INVALID, INCOMPLETE or WORKSFORME, other bugs are now NEW, and a lot of others are still waiting for a reply from the reporter after needs-replication).

Here is a report of my activities, listing the triage work done during the last week.

Developer Activity

This week I’ve started implementing some initial queries for my Bugmaster extension:

July 07, 2013 06:19 PM

June 30, 2013

Tiziana Sellitto

Where it has begun…quality assurance testing

Everyone who has surfed the Mozilla website has perhaps seen a link that says: Get Involved. This is the simplest way to start helping, to get all the info, and to find out what you want to help with. This was my starting point, too. I chose Quality Assurance and, in particular, the Desktop Firefox Team. This team focuses on desktop Firefox testing for upcoming major releases and maintenance releases.

Anyway, the first steps to follow when starting with Quality Assurance are:

Some of the first activities to do are:

More info is available in the doc section, where you can find all the info about Events, Test Days, Bugzilla… everything that can help you get started.

June 30, 2013 05:39 PM

June 25, 2013

Tiziana Sellitto

First Week in OPW

Last week I started my internship in the OPW with Mozilla. My main goal for the internship is to work as a Bug Wrangler: helping the Bugmasters community with bug management and triaging under the supervision of Liz Henry, my mentor. Another of my goals is to develop a Firefox addon aimed at QA people, an instrument to help users of Bugzilla and Mozilla through simple and immediate searches.

QA Activity

This week I’ve focused on bug triaging, and I took part in the

Here is a report of my activities, listing the triage work done, a new bug filed after a crash report, and new mentors assigned to old [good first bugs].

Developer Activity

This week I’ve started creating a small set of functional requirements for my extension. It’s an integration between Bugzilla and Mozilla, so I’ve started exploring

Throughout the week I’ve been on IRC, during and outside the test days, trying to help people approach QA activities, hoping to be of help to others just as people helped me when I started. This work makes me very proud. 🙂

June 25, 2013 01:27 AM

June 21, 2013


Week 2 had ended last week and week 3 will end soon

Hi Everyone,

Samvedana Gohil

June 21, 2013 05:30 PM

June 19, 2013

Dhaval Giani

Using volatile ranges

Now that we have a kernel which can do volatile ranges, the next question is how to actually use this feature.


The patchset provides a system call interface, so one could probably make a vrange.h like this:

#include <sys/syscall.h>
#include <sys/types.h>
#include <unistd.h>

#define SYS_vrange 314

static int vrange(unsigned long start, size_t length, int mode, int *purged)
{
	return syscall(SYS_vrange, start, length, mode, purged);
}

int mvolatile(void *addr, size_t length)
{
	return vrange((unsigned long) addr, length, VRANGE_VOLATILE, 0);
}

int mnovolatile(void *addr, size_t length, int *purged)
{
	return vrange((unsigned long) addr, length, VRANGE_NOVOLATILE, purged);
}

Now that we have some code which can be used, let’s dissect this code fragment. Since we don’t have a vrange syscall wrapper in glibc yet, we have to make a vrange function which will invoke the syscall on a vrange-enabled kernel. This system call takes 4 arguments. The first argument is the address where the volatile range starts. The next argument is the length that should be marked. The third argument is the mode: it lets the kernel know whether to mark that range as VOLATILE or NOVOLATILE. The final argument is relevant only when we try to mark a range as NOVOLATILE: it lets the user know whether the range was purged while it was marked volatile.

When a range is marked as volatile, it enters what I like to call the Schrödinger state: a user doesn’t know whether the data is there or not until it is either marked non-volatile or accessed. This leads to two ways of using this feature. One, as we discussed last time, is to mark the range non-volatile before use and fix up the data if it has been purged. The second approach, which was suggested by Taras, is to deliver a signal when accessing data that has been purged; within the signal handler one would then mark the range non-volatile, fix it up and let the application continue.

In order to get an idea about how to use these methods, take a look at the test cases in the git tree.

June 19, 2013 04:50 PM

June 18, 2013

Dhaval Giani

Volatile Ranges

John Stultz and Minchan Kim have been working on the volatile ranges feature in the Linux kernel for some time. One of my first tasks is to help them get the feature into the kernel.

This feature is especially useful on memory-constrained devices, but it is very useful on desktops as well. One use case that I think is really cool is the Firefox image cache. This is how Taras explained it to me: imagine you have a lot of tabs open, with Sports Illustrated open in each one of them. Every time we switch tabs, all the decompressed and decoded images in the tab being switched from are dropped, while we decode and draw the images in the new tab. If we didn’t do that, memory use would just skyrocket.

With the use of volatile ranges, Firefox could consume less memory while safely using more memory when available. The kernel knows that it can get rid of the image cache if there is memory pressure; in any other case it will let Firefox retain the caches, and that should translate to faster tab switching.

The code might now look something like this,

switch_tab(prev, next) {
	clean_up(prev);
	setup(next);
}

clean_up(tab) {
	vrange(tab_image_data, size, VOLATILE, NULL);
}

setup(tab) {
	int ret;
	vrange(tab_image_data, size, NON_VOLATILE, &ret);
	if (ret == SUCCESS)
		/* cache survived: reuse the decoded images */
	else
		/* cache was purged: re-decode the images */
}
But now we have the additional overhead of a system call, and not everyone is happy with that, which brings us to another possible way of using volatile ranges. When a page is in Schrödinger’s state (it might or might not exist) and you try to access it and it doesn’t exist, instead of faulting you get a SIGBUS. Conceivably one could have a signal handler which marks the range non-volatile, fixes it up and then lets the world continue on. This is a tricky feature to use, but it is very powerful. However, that is a topic for another blog post.

How do I help out?

In case you wish to help, here are the steps to follow:

1. Build the kernel. The version that I am using at the time of writing is available here.

2. Get the test cases. I have a git repository here

These test cases assume you have memcg mounted at /sys/fs/cgroup/memory/

Once you have the vranges kernel installed and booted, and the test cases built, run the tests; at some point you will be asked for sudo, and all the test cases will run.

Let me know how it went.

June 18, 2013 11:49 PM

June 10, 2013

Tiziana Sellitto

OPW…what is this?

I’ve started this blog talking about the OPW, Mozilla and the internship I’ll begin soon. But… OPW… what is this?
OPW Poster
OPW stands for Outreach Program for Women and, as the name implies, it’s a program that provides tech internships to women, organized by the GNOME Foundation and inspired by Google Summer of Code. The program aims to encourage women to take part in FOSS projects, guiding them through their first contributions. It first ran in 2006 with the GNOME Foundation, and then resumed in 2010 with editions organized every half year. Now many other FOSS organizations have joined the program.

Free and Open Source Software (FOSS) is software that gives the user the freedom to use, copy, study, change, and improve it. There are many Free and Open Source Software licenses under which software can be released with these freedoms. FOSS contributors believe that this is the best way to develop software because it benefits society, creates a fun collaborative community around a project, and allows anyone to make innovative changes that reach many people. FOSS contributors do various things: software development, system administration, user interface design, graphic design, documentation, community management, marketing, identifying issues and reporting bugs, helping users, event organization, and translations.[source]

Among the participating organizations of this round there is Mozilla, and this is the organization I’ve applied to.
I’ve always thought that FOSS projects were something for an elite: a closed world, difficult to approach. It’s nice to see that OPW has already proved me wrong: I come closer and closer to a community of triagers, developers and users spread throughout the world. I can be part of something, and this will certainly make me a better person on both the personal and professional sides.

This world needs human beings, human beings of all genders, not only men 🙂

June 10, 2013 06:47 PM

April 15, 2013

Vikash Agrawal

Google Summer of Code (GSoC 2013)

Hi Everyone,

GSoC 2013 has started. So I would like to wish everyone best of luck if you are participating this year.

I was a past GSoC participant under Mozilla MDN, and I was fortunate to have great mentorship from teoli!

Mozilla is participating this year under GSoC and you can find the list of ideas here.

This year MDN has two proposed projects and its my pleasure to see teoli is mentoring again 🙂

I see a lot of participants are enthusiastic about the idea for “MDN CSS Generation Tools”, which IMHO is a great project and a fairly moderate one :). I would suggest you talk to teoli about it, discuss, and show code. It’s good to remember:

“Talk is cheap. Show me the code.” – Linus Torvalds

Consistency, hard work, good code and a great proposal are all you need to get into GSoC.

Many people have already contacted me; if you wish to do the same, please feel free to ping me on IRC (vikash is my nick) or email me, and I would love to help you 🙂

Don’t forget my beer if you get selected 🙂



Tagged: #2013, #gsoc, #mdn, #mozilla

April 15, 2013 08:01 AM

November 20, 2012

Jonathan Wilde

Real-Time App Deployment with Dropbox

This is a rather belated post about my 2012 summer internship at Mozilla. It represents the state of affairs of Firefox Metro in August 2012; things have changed since then.

Windows 8 is ambitious. Most operating systems support an imprecise input method (touch) or a precise input method (mouse or pen). Windows 8 supports both, across tablets, laptops, and desktops.

Firefox Metro supports the same spread as Windows 8 itself. So, as a Mozilla summer intern developing its front-end code, I needed to regularly test my work on a tablet in addition to my development desktop.

I wanted Metro Firefox on the tablet to run untethered from the desktop. That way I’d be able to bring it up and demo it easily for people.

This meant maintaining separate Mercurial working copies and running builds on each device. Other team members depend on Elm, Metro Firefox’s project repository, not breaking. My code changes weren’t polished or stable initially. Pushing them directly to Elm wasn’t an option.

Mercurial Queues is a popular way to deal with this. It encapsulates in-progress changes as portable patch files, which you can synchronize between working copies with another Mercurial repository.

But it’s not a simple solution. Pushing a patch from desktop to tablet involves updating the patch file, committing and pushing it, pulling Elm changes on the tablet, pulling my patch changes, reapplying the patch, building on the tablet, and restarting Firefox.

This is cumbersome. What if all I had to do was build on the desktop, wait a bit, and restart Firefox on the tablet?

Enter, Dropbox

I figured I could accomplish this by using Dropbox to synchronize my working copy.

However, a typical Firefox working copy weighs in at several gigabytes, most of which is unnecessary for testing out a pre-built copy of Firefox. To save time and space in my Dropbox, it made more sense to only sync obj/dist, the core files needed to run Firefox.

Windows doesn’t seem to provide file-change events for what happens behind a symlink, so I couldn’t just symlink obj/dist from Dropbox into my working copy. Instead, I had to copy obj/dist into Dropbox, and then symlink obj/dist in my working copy to obj/dist in Dropbox.

After getting sync working, I needed to sort out some remaining Windows Registry and permissions issues.

To be a Metro style enabled desktop browser and have a proper tile on the Start Screen, there need to be Windows Registry entries pointing to the Firefox executable. There already was a makefile in Elm to generate registry entries pointing to a Firefox executable in obj/dist for the current working copy. Since the path of my Dropbox folder on the tablet differed from the location of my working copy, I had to tweak the paths in the generated registration scripts.

But even after getting registered, Metro Firefox wouldn’t launch. The Windows Event Log pointed to permissions errors.

To keep file paths short on the command line, I had put the Dropbox folder in the root of the tablet’s C:\ drive. Since Windows doesn’t automatically give files in the drive root the required execution permissions, Metro Firefox wouldn’t launch.

Firefox started working after I moved the Dropbox folder to my user directory on the tablet.


A few weeks later, Yuan from UX asked:

15:23:55 - yuan: is there any other way for you to share your front-end work? jwilde

We couldn’t yet generate an installer for Metro Firefox because some key build system changes hadn’t landed.

These have since landed and power the Firefox Metro Preview.

So, I decided to turn my Dropbox builds into a shared folder for Yuan to use. I automated the setup process with some Python scripts that generated the appropriate Windows Registry files based on the user’s Dropbox folder location, and documented the installation process publicly on Etherpad. It was janky, but usable as a temporary solution for bleeding-edge testers.

A bit later, she had a working copy of Metro Firefox that would receive updates in real-time.

Yuan posted a link to the Etherpad on the Mozilla Wiki so that other people could try out the build. Given the roughness and instability of the builds in the Dropbox, I chose to keep the Dropbox private but give out access to anybody who asked. By the end of the summer, 13 people were actively testing the Dropbox build and providing incredibly useful feedback.

As more people joined the folder, I felt a responsibility to do a better job at keeping builds working than I did when it was just me and my tablet.

Directly real-time syncing obj/dist meant that shared-folder members could see all changes, both good and ugly. To reduce the chance that users would see ugly changes, I got rid of the symlinks into the Dropbox folder. I started compiling into a staging folder and using a Bash script to copy obj/dist into Dropbox once I’d checked that everything looked good.

This cleaned things up some, but with ~13 people in the folder there was a decent chance that at least one person would have Firefox open, putting file locks across their copy of Firefox. When Dropbox receives changes to a locked file, it tags the file “in conflict” and uploads the alternate version. Because all members of private shared folders have write access, duplicate files were strewn across the Dropbox folder, leading to clutter.

Despite the issues that I ran into, I think Dropbox is a workable development-deployment solution for a small number of devices when products like Adobe Shadow or Visual Studio Remote Debugging aren’t workable. A public, read-only shared folder might have handled the scaling issues better, but I’d stick to proper updater systems once apps grow beyond one or two testers.

November 20, 2012 12:00 AM

August 21, 2012


Networking Dashboard v01

Networking Dashboard is a tool that shows the networking internals of Firefox. It’s an add-on that I made as part of my GSoC project; it can be used by typing 'about:networking-dashboard' in the URL bar after installing it. It shows the current open connections with host names, how many of them are active/idle, whether they use SPDY, whether they use SSL, DNS cache entries, total data sent and received, data related to WebSocket connections, etc. I have blogged about it in detail in my previous post.
Following are some screen shots which might give you some idea about the tool.
The timeline shows a graphical view of the total data sent and received since starting the add-on.
One feature that I couldn't implement as part of add-on is header window which shows sent and received headers of the packets Http connections.

I have used jQuery and flot (a pure JavaScript plotting library for jQuery) for building the add-on. I can make the add-on available, but it won’t work because the back-end part of the code hasn’t been migrated to mozilla-central yet.

Http Connections

Socket Connections

DNS cache entries

WebSocket connections

Timeline showing graph of total data sent and received

August 21, 2012 02:34 AM

August 06, 2012

Vikash Agrawal


hi Friends,

GSoC is not just about code; it is also about time management, punctuality and deadlines [and trust me, this is the difficult part of it].

It gives me immense pleasure to see that I have met all my deadlines (whether set by me or by teoli) well in time. It is one of those moments when you just want to relax, have a sip of tea and say “Yes, I nailed it, bitch”. It is also one of those moments when you feel you couldn’t have achieved this without the guidance of your mentor/teacher.

My next hurdle is the “Final Evaluation”, so fingers crossed 🙂

Tagged: #gsoc, #gsoc-mozilla, #mozilla, #mozilla-gsoc

August 06, 2012 01:59 PM

July 28, 2012

Vikash Agrawal

Writing after a very long time

I think it’s been ages since I last wrote a blog post, so I will take this in order, and from now on I will try to be more regular 🙂

So, yes, I successfully cleared my mid-term evaluation. Yay!

After that, I worked on the project for a while and it’s going great.

Now the worst part: for about a week I couldn’t write a single line of code. Unfortunately I have been severely ill these days and completely bedridden. I did inform teoli about this and he is certainly aware of my situation. Moreover, these days I was absolutely cut off from the outside world too 😥

So, once I recover (which I hope will be within the next three or four days), I will be back on my project.

I think it won’t take more than a week to meet my GSoC phase 1 and phase 2 targets. Also, once I meet those, I think the technical parts will come to an end and I will indeed be a confirmed GSoC’er 😉 🙂


Tagged: #gsoc, #gsoc-mozilla, #mozilla, #mozilla-gsoc

July 28, 2012 03:38 AM

July 12, 2012


Update of GSoC Till Midterm Evaluation

I have been working on the Networking Dashboard for quite a while now and I think the back-end part of the project is nearly done. It involves exposing information related to connections of different protocols (HTTP, WebSocket, etc.) through different interfaces.
This blog post is an update on how much has been done so far and what I am going to get done in the future.

What has been done so far?
So far I have implemented four interfaces. Every interface returns a JSON string containing different information in different fields.

1) nsIHttpConnectionLog.idl :- This interface returns the following information related to current HTTP connections in the form of a JSON string,

- host : Host name of the HTTP connection
- port : port no. of the connection
- spdy : If connection using spdy or not
- active : rtt(Round Trip Time) and ttl(Time To Live) of active connections in respective fields
- idle : rtt and ttl of idle connections in respective fields

The connection data has been collected from nsHttpConnectionMgr.h, where data related to HTTP connections is maintained in the form of a hash table.
Following is an example of the JSON string returned from this interface after opening a couple of sites:
{
"host" : ["","","","","","localhost","","","","","","","","",""],
"port" : [443,80,443,80,80,80,443,80,80,443,443,80,80,80,80],
"active" : [
{ "rtt" : [0], "ttl" : [145] },
{ "rtt" : [], "ttl" : [] },
{ "rtt" : [0], "ttl" : [143] },
{ "rtt" : [], "ttl" : [] },
{ "rtt" : [], "ttl" : [] },
{ "rtt" : [], "ttl" : [] },
{ "rtt" : [0], "ttl" : [144] },
{ "rtt" : [], "ttl" : [] },
{ "rtt" : [], "ttl" : [] },
{ "rtt" : [0], "ttl" : [144] },
{ "rtt" : [0], "ttl" : [145] },
{ "rtt" : [], "ttl" : [] },
{ "rtt" : [], "ttl" : [] },
{ "rtt" : [], "ttl" : [] },
{ "rtt" : [], "ttl" : [] }
],
"idle" : [
{ "rtt" : [], "ttl" : [] },
{ "rtt" : [0], "ttl" : [95] },
{ "rtt" : [], "ttl" : [] },
{ "rtt" : [0], "ttl" : [86] },
{ "rtt" : [0], "ttl" : [79] },
{ "rtt" : [], "ttl" : [] },
{ "rtt" : [], "ttl" : [] },
{ "rtt" : [0,0,0,0], "ttl" : [89,86,85,85] },
{ "rtt" : [], "ttl" : [] },
{ "rtt" : [], "ttl" : [] },
{ "rtt" : [], "ttl" : [] },
{ "rtt" : [0], "ttl" : [75] },
{ "rtt" : [0], "ttl" : [86] },
{ "rtt" : [0], "ttl" : [85] },
{ "rtt" : [0], "ttl" : [75] }
],
"spdy" : [true,false,true,false,false,false,true,false,false,true,true,false,false,false,false]
}
Empty ttl and rtt arrays mean that there is no connection of that kind with the host.
You can have a better look at it using any online JSON editor. Sorry for posting such a long JSON string; it looks ugly if I don’t format it this way.

2) nsIWebSocketConnectionLog.idl :- This interface returns the following information related to current WebSocket connections in the form of a JSON string,

- hostport : Host name and port of the WebSocket connection
- msgsent : No. of messages sent
- msgreceived : No. of messages received
- encrypted : If the connection is secure or not

There is no maintained data available for WebSocket, so all the data related to the connections is maintained in the implementation of the interface (in the file nsWebSocketConnectionLog.h). Every time a new connection is made to a new host, or an old connection is closed, an entry is added to or removed from the database, provided an instance of the interface has been created.

I found websites that use WebSocket, and here is one example of the JSON string returned from the interface after opening a couple of them and wandering around a little bit:

{
"hostport" : ["",""],
"msgsent" : [76,8],
"msgreceived" : [99,223],
"encrypted" : [false,true]
}
3) nsISocketLog.idl :- This interface returns the following information related to current socket connections in the form of a JSON string,

- host : peer host ip
- port : peer host port
- tcp : if the connection is TCP or not

Data related to sockets is collected from nsSocketTransportService2.h.
The data returned in the above example is:
{
"host" : ["","","","","","","","","","","","","","","",""],
"port" : [20480,20480,20480,20480,20480,20480,20480,20480,47873,47873,47873,47873,47873,20480,20480,20480],
"tcp" : [true,true,true,true,true,true,true,true,false,false,false,false,false,true,true,true]
}
This data is not much; I am hoping to add some more fields (like total data sent and received) to this interface.
4) nsDNSEntries.idl :- This interface returns the cached entries in the DNS cache with the following fields,

- hostname : name of the host
- hostaddr : ip address of the host
- family : ip address family(ipv4 or ipv6)
- expiration : time when the cache entry expires(in minutes)

Well, this interface is near completion (believe me, it is). One little difficulty is stopping me from posting its output here; maybe I will post it in a comment after it’s done. It’s going to contain the above fields.

This concludes what I have completed so far.
I have been maintaining a mozilla-central repository, committing my updates to the branch 'Networking Dashboard'. You can have a look at it if you want. To test these interfaces, I have made a temporary addon using Jetpack, without any GUI.

What am I going to do next?
First of all, I am going to finish the DNS interface.
Since the basic interfaces have been implemented, I can now start working on the GUI part of the addon. Along the way, I might need to make changes to the interfaces or add some new data.
I also want to include timestamps and the size of data sent/received in the above interfaces for the different protocols. These two features are very important to the addon; I will work on them after I am done giving a face to the 'Networking Dashboard' addon.

This was an overview of my GSoC work so far. I have learnt a lot of new things about networking and Mozilla internals throughout this period. Thanks for reading. I hope you download my addon in the future :)

July 12, 2012 05:47 PM

June 26, 2012

Vikash Agrawal

~2 Weeks from Now~

hi Everyone,

These days I have been refactoring and updating my code as per teoli’s review, but that was until yesterday.
For the non-Mozilla or non-MDN folks: MDN is shifting to the Kuma wiki from the old wiki system, and two weeks from now I have my first mid-term evaluation.
I personally felt that I had been going slow for a few days, and this was one of the points on which my mentor and I agreed. So we have decided that instead of cleaning the code, I will do the border properties. That means that before the mid-term I am very clear and brainstormed on what has to be done; things are crystal clear in my mind. Today I will go through the spec. Once I jot down its major points, I will move on to the fun part. It is also decided that I will make separate files for each property. Also, to keep it simple, we have decided that I will first do top, then left, right, bottom, and finally the border-* shorthand. I think this approach is very good, as I will have smaller goals which are easier to achieve.

Other than this, I am very sleepy. I have a terrible shoulder ache, and I don't know why I am getting cramps in my stomach :'(. The bad part is I will only have time for the doctor after 9th July. Till then I can't afford to.

So, mid terms, here I come and give me some good news


Tagged: #gsoc, #gsoc-india, #gsoc-mozilla, #mdn, #mid-term, #mozilla

June 26, 2012 05:18 AM

June 21, 2012

Vikash Agrawal

Work is there and time is not

Ah! one more week has come to an end.

This week was full of ups and downs. I *strongly* believe I have slowed down and am going slow in my work. But there is an entire story behind it, so to keep it simple I will use bulleted lists.

The story doesn’t end here. For all these things I had to travel at least 2 hours in the metro, and the summer weather had killed me.

With all this, on Monday I had a very important meeting which was again 2 hours away. So it is very true that I am exhausted like hell with all this travelling, almost twice a day.

With all this, I really tried to stay on track, but somehow I failed to put in a lot of my energy. And it's true: when you are feeling bad and you know you have to do it by hook or by crook, you do it. I am in a very similar situation now. I know I am lagging in my work like hell, so I am going as fast as I can and working as much as I can. My work includes reading specs, and trust me, it is sometimes impossible for me to find the real and updated one.

BUUUUTTT, with all this, the *best* part is: I was very scared to confront my mentor about it, but trust me, my mentor Teoli is super awesome. He understands everything and motivates me to do more. His guidance indeed plays a very important role in my progress. I am sure I couldn't have achieved my deadlines without his constant support and help. I am eagerly waiting to meet him in person, *one day*. I remember I was trying to draw a pattern using linear gradients and was unable to for days. He brainstormed with me, and within the next 5 minutes I had it running in all the modern browsers. The story would be incomplete if I didn't mention his efforts, but if I did, I am sure you would take a day to read the entire thing.

Signing off



Tagged: #gsoc, #gsoc-mozilla, #mentor, #mozilla, #superawesome

June 21, 2012 12:02 PM

June 16, 2012


Update on GSoC (May 21 - May 13)

I have exposed connection data for protocols like HTTP and WebSocket, as well as socket data, to JavaScript.

Now data related to these protocols can be accessed through new interfaces.
The data available for each protocol is:

HTTP: host name, port number, number of active connections, number of idle connections, and whether the connection is using SPDY (in future: socket identifiers for particular connections, RTT for connections, TTL, etc.)

WebSocket: host name and port number (in future: number of messages sent so far)

Socket: socket identifier, plus the packets sent or received via that socket (not perfect right now)

We can match different HTTP connections to their packets in the socket data via the identifiers.
Next, I might add a new interface on top of all of these which combines the data and returns a single JSON string. We can also do some calculations using this data.
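As a sketch of what such a combined interface might return (the field names and data shapes here are my own illustration, not the actual add-on's interfaces), the per-protocol data could be merged into a single JSON string like this:

```python
import json

# Hypothetical per-protocol snapshots, shaped after the fields listed above.
http_data = [{"host": "example.org", "port": 80, "active": 2, "idle": 1, "spdy": False}]
websocket_data = [{"host": "example.org", "port": 443}]
socket_data = [{"id": "0x1a2b", "sent": 1024, "received": 4096}]

def dashboard_json():
    """Combine all per-protocol data into one JSON string."""
    return json.dumps({
        "http": http_data,
        "websockets": websocket_data,
        "sockets": socket_data,
    })
```

A consumer (the add-on's GUI, say) would then parse this one string instead of querying each interface separately.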

June 16, 2012 01:02 PM

June 14, 2012

Vikash Agrawal

Week 2 had ended last week and week 3 will end soon

hi Everyone,

I am back, and my mentor too. Ok, So the story of my work begins.

I know I am a bit late on this, but yes, I have been very busy these days. First things first: I have done a good portion of the work from my end, AFAIK. Teoli has started coming over to IRC and reviewing my code.

He has suggested some additions and removals, though he also appreciated some of my examples.

During these days, I submitted a small patch to . The project is now dead ;'( and my patch was a test patch. Still, the good part was that it was accepted by Paul Irish. Also, once I am free I will push some patches there too.

Also, in the meantime I got a dog bite, my phone's screen cracked, and I am still going through doses of vaccination.

Now for the fun part. My girlfriend is going to a party and I won't be able to make it there, because tomorrow we (as in GSoC'ers and GSoC alumni) are having a meetup at the Google Gurgaon office. Being a GDG manager from Manipal, this was my idea [my bragging right] and I put it forward. I am sure tomorrow it will be great fun meeting all of them.

The story doesn’t end here. On Saturday and Sunday I am participating in the GDG Delhi event, HACK: Code for a Cause. We are planning to make an Android app using PhoneGap + jQuery Mobile to achieve something decent. We are using many APIs from and their work is certainly pretty awesome.

So, it's time for me to get back to work and code something useful and helpful.

You can also follow me on Twitter or watch/fork my work at Github.


Vikash Agrawal

Tagged: #dataweave, #gsoc, #mozilla, #mozilla-gsoc

June 14, 2012 10:13 AM

May 31, 2012

Vikash Agrawal

Work Started

hi Everyone

I am sure you would be enjoying your work as much as I am.

This is my first blog post since the GSoC coding period started. I started my work on the 27th rather than the 21st (I had my exams from the 15th to the 25th and then travelled back home on a 30-hour train journey). I had already notified teoli (he is the *superstar* who guides me when I am not on track) and sheppy (the documentation overlord) about this. But I certainly didn't lose out on anything, as I had worked before the 15th to keep things going.

So far my work is going at a good pace, and I will increase my productivity in the coming days too. Today was a very good day for me. I got an office space at TLabs with the help of Uttam Tripathi and Arpit Agarwal, so I have been working from 10:00 AM till now (almost 6:30 PM) with the help of caffeine. I did check in on Foursquare, and this is my workstation . The best thing is, they have allowed me to come over and work on my GSoC project with no strings attached. Thank you, both of you.

I also met entrepreneurs from

Today the productivity was good, but I did face a problem writing an example for scaleZ, as I couldn't understand how to test it. I am not disheartened, though, as I am sure teoli will have some awesome suggestions for me. In fact, I had a discussion about scaleZ today in #CSS on Freenode. It was nice chatting there, but no output came of it, as many hadn't tested it. Some folks didn't even know that scaleZ existed, so I had to give them the link -> the W3 recommendation.

I will share some cool pics of my office and work soon :P. I saw some awesome posters here by Bluegape. My favourite one is the GitHub poster.

Till then, you can fork / file bugs / send pull requests, and learn and try out the project.



Vikash Agrawal

Tagged: #bluegape, #css, #freenode, #github, #gsoc, #gsoc2012, #mdn, #mozilla

May 31, 2012 01:11 PM

May 26, 2012

David Tran


Ricky Yean

What does it mean for someone to “grow up” in your organization? What kind of values would the person possess? What would the person be really good at?

I started thinking about this after attending a board meeting for BASES (Business Association of Stanford Entrepreneurial Students) as an advisor. In that meeting, we elected Ruby Lee to serve as the new Co-President, which brought me all the way back to September ’09 when I first met Ruby. At the time, BASES did not have a way to get freshmen involved, and it suffered from a shortage of talent to step up and lead. In response, I created a “Freshmen Battalion” to suck in the freshmen as soon as they set foot on Stanford’s campus. We would then rotate them to different teams so they can see what they’re interested in. Ruby was part of this inaugural class of freshmen…


May 26, 2012 01:49 AM

May 04, 2012

Vikash Agrawal

Mozilla Blog

hi Everyone,

It's 1:21 AM here and we are having a nice discussion on various topics in #devmo on IRC. Suddenly I came across an #awesome link. Thank you, sheppy, for that one.

It feels great to see my name on the blog.

Also, I am very grateful to teoli for believing in me.


Vikash Agrawal 🙂

Tagged: #awesome, #gsoc, #mdn, #mozilla

May 04, 2012 08:00 PM

April 24, 2012


GSoC 2012 - Networking Dashboard

My proposal for a 'Networking Dashboard' for Mozilla got selected for GSoC 2012 today :).
I have been a contributor to Mozilla for a long time now; till now it was mostly bug fixes.
But now I will be working on something real for Mozilla. My project is to make a tool for developers which gives them insight into Mozilla networking. Google Chrome already has something similar: "chrome://net-internals". This tool will be implemented as an add-on. The link to my proposal is here. It has a description of what functionality will be implemented. I hope to have a link to the add-on soon. I might blog about my experiences with Necko and about my progress in the future. Stay tuned.

April 24, 2012 07:49 AM

April 23, 2012

Vikash Agrawal

GSoC selection

hi Everyone,

It's 1:30 AM and certainly the best start ever for me.

I made it through once again in GSoC, and this time under Mozilla with some amazing mentors. Today I am damn happy.

Congratulations to all those who made it through once again


Vikash Agrawal

Tagged: #fun, #gsoc, #mozilla

April 23, 2012 08:06 PM

GSoC result for Mozilla

GSoC result for Mozilla

My fate over the summer

Tagged: #gsoc, #mozilla

April 23, 2012 07:58 PM

April 04, 2012

Wei Zhou

Location based sex service – this is ridiculous.

OK, before I go ahead and make any insightful comments, all of you need to know about this popular mobile app: Mo Mo (Chinese name: 陌陌). It could be directly translated as “strange strange”. And it's a location-based sex hookup app (though that's not explicitly explained on their website).

Go ahead and download this app here

If you know Chinese well, go ahead and read this page; it explains the skills you'll need to pick up girls.

Everyone else thinks this app is going to be huge someday and has the potential to beat weixin (the biggest voice instant-messaging app, developed by Tencent China) and weibo (the largest micro-blogging SNS, which gains Facebook-like attention in China, developed by Sina).

The reason behind this is quite scientific: it meets human beings' basic desires.

And I just think this is so wrong. A digital product's moral value should be considered during development.

Compared to Path (a very poetic, private, and life-enhancing experience), Mo Mo is wrong in almost every way.


April 04, 2012 11:08 PM

What do future digital businesses need?


The world is going digital. It's inevitable, intense and ongoing. It relies heavily on new trends in technology, which create two identities for each individual (a real self and a second, digital self). As these two selves melt together, humanity, culture, nature and business merge into one single important realm: DESIGN. Why?
The major difference between humans and other animals is that “we as human beings are constantly and reflectively changing and creating things and values”. If we deconstruct the human brain, we find that it is perfectly designed to align with our worldly visionary pattern: the left brain, the logical one, constantly seeks efficiency, while the right brain, the emotional one, relentlessly creates value. It is not hard to assume that the most valuable human behavior is creating innovative businesses that transform needs and wants into meaningful realities. We believe a business model describes the rationale of how an organization creates, delivers and captures value. A designer's job is to extend the boundaries of thought, to generate new options, and, ultimately, to create value for users. This requires the ability to imagine “that which does not exist.” We are convinced that the roles and attitude of the design profession are prerequisites for success in business model generation. We can boldly assume that future business practitioners will need knowledge of both business and design to create truly innovative and successful digital products.
SMALL and HUMAN. Information technology rapidly reshapes the world and renders it a flat and mashable space. Technology and resources become less and less important factors for business creation. IT starts to penetrate human nature and immerses itself in every single aspect of daily life. We conclude that a potentially successful business model needs solid and careful observation of human wants and needs. If a small business model can solve a tiny problem in a set environment, it can be used as an independent pattern for successful value creation. It contains a timeless value that can be used directly or replicated in a similar environment. We can then assume the user experience design associated with that pattern is positive, with guaranteed business value in it. Venture capitalists love these small and human business ideas.
Everyone is an entrepreneur. This concept originates from a plain fact: we now see even elementary school students creating iPhone apps to bring in extra income. The income tunnel is not strictly tied to age, profession or geography. A simple idea with limited capital and human resources could become a marketable business model, potentially bringing in revenue and value.
Value Co-Creation. “Everyone is an entrepreneur” is both a concept and a fact. The integration of resources becomes a must. Interdependency is necessary.
In sum, I believe digital business needs three things: small & human, everyone as an entrepreneur, and value co-creation.

April 04, 2012 03:50 AM

March 24, 2012

Ehren Metcalfe

The Dead

that region where dwell the vast hosts of the dead

Conservatively detecting unused functions in a large codebase requires more information than can be gleaned from a simplistic observation of compilation (using GCC or clang, for example). Functions called only from ASM are one problem; functions called only via dlsym are another. These are not insurmountable problems (ASM can be parsed; dlsym accepts a constant identifier).

In addition, mozilla-central contains a large amount of third party code. Removing unused functions in these libraries is at odds with merging upstream changes. There are also large numbers of functions used on one platform (or build setting) but not another. Propagating the appropriate #ifdefs is the solution here but it’s a large task.

I’d like to offer one approach with an eye towards identifying the easiest candidates for removal. This is a combination of some of my existing work, by now admittedly crusty, based on Dehydra and Callgraph, but with a new (to me) simple insight about how to group the members of the subgraph of unused functions: partition them by the “is called by or is a (dead) caller of” relation.

This has the effect that if a false positive is in the results, all functions declared unused because they’ve been transitively called by the false positive get grouped with that false positive.
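As a sketch (my own reconstruction, not the tool's actual code), partitioning by this relation amounts to computing connected components of the call graph restricted to dead nodes, ignoring edge direction:

```python
from collections import defaultdict

def partition_dead(dead, calls):
    """Group dead functions by the 'is called by or is a (dead) caller of'
    relation: connected components of the call graph restricted to the
    dead nodes, with edge direction ignored.

    dead:  set of function names flagged unused
    calls: dict mapping each function to the set of functions it calls
    """
    # Build an undirected adjacency map over dead nodes only.
    adj = defaultdict(set)
    for caller, callees in calls.items():
        for callee in callees:
            if caller in dead and callee in dead:
                adj[caller].add(callee)
                adj[callee].add(caller)
    # Standard iterative component search.
    seen, parts = set(), []
    for node in dead:
        if node in seen:
            continue
        comp, stack = set(), [node]
        while stack:
            n = stack.pop()
            if n in comp:
                continue
            comp.add(n)
            stack.extend(adj[n] - comp)
        seen |= comp
        parts.append(comp)
    return parts
```

A false positive then drags its whole transitive callee chain into one component, which is exactly the grouping effect described above.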

The output of a tool employing the above is here (source here).

Some notes on the results

The largest partition consists of 541 nodes (warning: giant file). It is the result of a single false positive, legacy_Open, called only via PR_FindFunctionSymbol, a dlsym wrapper. I have attempted to mitigate these situations by colouring green those functions having __attribute__((visibility("hidden"))) (visibility specified in the flags passed to GCC is also detected). The count of functions having this visibility, but not transitively called by a function that doesn’t, is listed in the ‘Transitively Hidden’ column. This info is useless here, however, because the function is exported using a linker map.

It would be interesting to determine if the visibility is always correctly applied; I seem to remember seeing warnings related to the attribute in the past. There may be technical reasons why internal C++ must be visible but, in any case, its visibility does not preclude its removal. Here’s proof. (Also, I calculate that only 17.4% of mozilla-central has hidden visibility).

Moving on, the next largest partition has 507 nodes. They are all in the pulled-in chromium sources, however. Apparently, it is genuinely dead, because the minimum-ranked node, evrpc_register_rpc, is only used in a test case (via a macro).

xptcstubs’ PrepareAndDispatch is an example of a false positive due to ASM (graph here).

Anyhow, I will leave the reader to examine the rest which gets more interesting once you skip the mega partitions.

Potential improvements

The non-callgraph portion of the tool is really just a couple of hacky scripts. The same thing could be implemented against the DXR callgraph (it would need to determine the targets of function pointers in some regard).

On the topic of handling function pointers, note that the call graph of this tool uses the naive address-taken approach. However, it has been ‘optimized’ for the dead code problem by treating the taking of a function’s address inside the scope of a function as a normal call. This makes the call graph useless for practically anything else, but I plan on adding an addressTaker/addressTakee table in the future.

In fact, such an addition would make it easier to implement a simple type-based alias analysis, where functions noted to have their address taken at global or local scope are matched against pointers using their return type and the types of their parameters (they’d then be pruned from the address-taken list). I’m not sure if there would be problems with this beyond unsafe casting.
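A minimal sketch of that type-based matching (the signature tuples and dict shapes here are illustrative assumptions; the real tool would pull this information out of its database):

```python
from collections import defaultdict

def match_by_type(address_taken, pointer_sites):
    """Conservatively match each function pointer against every
    address-taken function whose signature (return type plus parameter
    types) agrees with the pointer's type."""
    by_sig = defaultdict(list)
    for fn in address_taken:
        by_sig[fn["signature"]].append(fn["name"])
    # Each pointer may alias any address-taken function of matching type.
    return {ptr["name"]: by_sig[ptr["signature"]] for ptr in pointer_sites}
```

Unsafe casts through `void*` or mismatched types are exactly what this misses, which is the caveat noted above.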

(More precise attempts at function pointer may-alias analysis would first require an interprocedural control flow graph but that is the simplest step. I’m somewhat tempted to build an ICFG using a hackish framework where visiting a function body = recompiling a .ii file. Were one to go about this, doing some basic form of interprocedural constant propagation and seeing what unused blocks fall out might be more fruitful.)

In any case, I do have some numbers regarding how many more functions one could expect to declare dead by improving alias information. At one point I had made a change that affected the creation of SQL statements building the graph. By mistake I had left out the generation of SQL for all address taking, regardless of scope. This resulted in the tool flagging just over 50000 functions as dead. Given that it currently flags 11479 out of 187081 (6% — seems low), one might expect to find another 2300 (6% of 39000) or so (these numbers include declarations, definitions, gcc builtins and the like). Although come to think of it, that’s a dubious estimate because function scope address taking is already handled in an, albeit, imprecise manner and the number of functions whose address is taken at file scope is only 5949 (using the 6% figure, only 357 more would be found).

Finally, another area of unnecessary conservatism is that, as you’ll note in graphs such as this one, there is an edge going both ways from a method declared in a base class to its override in a derived class (for example, there is an edge from nsSVGAElement::GetLinkState() to Link::GetLinkState() because of a direct function call, but there are also two edges linking nsSVGAElement::GetLinkState() and nsIContent::GetLinkState() because of their inheritance relationship). The edge really only needs to go from the base to the derived though. Once this is changed, the tool will presumably flag instances where a method overrides a base class method but all calls go through the derived implementation.

March 24, 2012 06:48 PM

March 10, 2012

Brian Krausz

Joining Facebook

For those who don’t follow my company blog:

GazeHawk Team Joins Facebook

I start at Facebook Monday. As always I’m hoping I have the chance to blog more actively, though we’ll see how realistic that is.

To go along with the new job, I’ve moved my blog over from to, and made it look a little more professional. will be used to host my pet projects as they pop up. Feedback/suggestions on the redesign are certainly welcome!

Joining Facebook was originally published in Stories by Brian Krausz on Medium, where people are continuing the conversation by highlighting and responding to this story.

March 10, 2012 06:40 AM

February 16, 2011

Michael Sullivan

Doing whole tree analysis with Dehydra


In this post I discuss how I used Dehydra to do analysis of the entire Tracemonkey tree.


I talk briefly about some Tracemonkey things to motivate what I used Dehydra for, but a knowledge of Tracemonkey is not required to appreciate the main point of this post.

The problem

One of the optimizations in my Tracemonkey inline threading work (Bug 506182), which I will be posting about later, is “PC update elimination”. Doing this requires figuring out which functions out of a certain set can access the JavaScript virtual program counter (which is stored as a variable named “pc” in a class named “JSFrameRegs”). There are about 230 functions in this set, so doing it manually is impractical.

The solution

I used Dehydra to help solve this problem mechanically. Dehydra is a static analysis tool built on top of gcc. It allows the semantic information of a program to be queried with JavaScript scripts.

First steps

By providing a process_function() function in a Dehydra script, I can inspect the variables used and the functions called by every function. Determining whether the pc is used is as simple as seeing if any variable has the name "JSFrameRegs::pc":

function process_function(f, statements) {
  for each (let v in iterate_vars(statements)) {
    if (v.name == "JSFrameRegs::pc")
      print(f.name + " uses the PC directly");
  }
}

The catch

Unfortunately, this isn’t quite what we want. This will tell us which functions directly use the PC, but not which functions can indirectly use it through function calls. Figuring that out requires looking at all the functions called by a given function. This is not straightforward with Dehydra, as functions frequently call functions declared in other files. Dehydra is a gcc plugin and is driven by the normal build system. Thus, it works on a file-by-file basis. Normal builds output an object file for each source file and rely on the linker to stitch it all together. Likewise, to do a whole tree analysis we need to output per-file information and then link it together later.

Collating the data

I reworked my process_function() to determine both whether the PC is directly accessed and the set of functions called directly. This data is then printed with the dump() function (discussed below):

function process_function(f, statements) {
  let usespc = false;
  let calls = {};

  for each (let v in iterate_vars(statements)) {
    if (v.name == "JSFrameRegs::pc")
      usespc = true;
    if (v.isFunction)
      calls[v.name] = true;
  }
  dump(f.name, usespc, calls);
}

The remaining question is how to structure our output. Outputting it as data declarations for some programming language seems like an easy way to do it. Since I am more comfortable with Python than JavaScript, I output the data as Python code:

function dump(name, usespc, calls) {
  let s = '@@@';
  s += '\t"' + name + '": ({';
  for (let f in calls) {
    s += '"' + f + '", ';
  }
  s += '}, ' + (usespc ? "True" : "False") + '),';
  print(s);
}

We create a dictionary mapping function names to tuples of (the set of called function names, whether the PC is accessed directly). When doing a build, the output will be intermixed with output from other parts of the build infrastructure, so we tag all of the output lines with ‘@@@’ so that a post-processing shell script can recognize and extract the relevant data.

The relevant bit of the shell script (which is linked below), is:

(echo "callgraph = {";
cd "$DIR" && make -s CXX="$CXX" 2>&1 | grep ": @@@" | cut -d@ -f4-;
echo "}")

This data gathering method could easily be modified to analyze different problems. For simplicity, I extracted only the relevant information and converted it directly to Python data structures. Another approach would be to dump all of the function information (probably in JSON), allowing a later analysis script access to everything.

The analysis

I wrote an analysis script in Python to process the data. The input is a large dictionary, mapping functions to the set of functions they call and whether they touch PC directly.

I viewed the problem in terms of graph theory. The mapping from functions to the functions they call is simply a directed graph: each function is a node, and there is an edge from a function to each function that it calls. In this graph, some nodes (those that touch the PC) are initially marked. We want to color red every node that has a path to one of the initially marked nodes.

Another way to state this is that a node is colored red if and only if it is either initially marked or has an edge to a red node. Doing the coloring is a simple problem. We simply compute the reverse graph (reversing all of the edges) and then perform a depth first search of the reverse graph starting from the marked nodes. Every node we see, we color red.

When doing the DFS, we keep track of which node we first reached a newly colored node from, so that we can determine a path back to an initially marked node.
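The search described above can be sketched in Python (a reconstruction for illustration, not the original analysis script; names are my own):

```python
def color_red(graph, marked):
    """graph maps each function to the set of functions it calls; marked
    is the set of functions that touch the PC directly.  Returns a dict
    whose keys are all the red nodes, mapping each newly coloured node
    to the node it was first reached from in the reverse graph (None for
    the initially marked nodes), which gives a path back to a marked node."""
    # Reverse all the edges.
    reverse = {}
    for caller, callees in graph.items():
        for callee in callees:
            reverse.setdefault(callee, set()).add(caller)
    # Depth-first search of the reverse graph from the marked nodes.
    pred = {m: None for m in marked}
    stack = list(marked)
    while stack:
        n = stack.pop()
        for caller in reverse.get(n, ()):
            if caller not in pred:
                pred[caller] = n
                stack.append(caller)
    return pred
```

Following the `pred` chain from any red node terminates at an initially marked node, which is the path-reporting behavior described above.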

My implementation also supports providing a predicate to exclude certain nodes from the search, in order to investigate how fixing certain functions to not require the PC would change things.


While my method is useful, it is not perfect. It can produce both false positives and false negatives.

False negatives result from polymorphism in the form of virtual functions and function pointers. When a call to such a function is made, there needs to be an edge from the caller to all of the functions that could be called. For virtual functions, this is all functions overriding the virtual function. For function pointers, I think this is every function of the proper type that is used in a way that is not a call at some point. It should be possible to address this problem, but at significantly increased complexity compared to what I have. Fortunately for me, Tracemonkey does not use virtual classes much and does not use function pointers in places where it matters for my analysis.

False positives are a little trickier, and can’t be worked around in general. Consider the following:

void foo(bool bar) {
    if (bar)
        ;  // access the PC
}
void baz() { foo(false); }

Calling baz() will not result in the PC being accessed, but my analysis will report that it can access the PC (since foo() can). Knowing whether a function can actually cause the PC to be accessed is equivalent to the halting problem and thus undecidable.

The code

My code is available here. expects to find g++ and Dehydra under ~/gcc-dehydra/, as suggested in the Dehydra installation instructions. must be run with its output redirected to contains both the general search algorithm and code that uses it to analyze functions introduced in my inline threading patch.

February 16, 2011 06:02 AM

November 22, 2010

Brian Krausz

How to Catch a Cheater

Update: I wanted to explicitly mention that these homeworks were basically puzzles, which is why Googling was off-limits.

Update 2: Thanks gzak for remembering the actual name of the problem. It’s “Finkleberg’s 101 Game” and a copy of the actual homework is here.

There’s a story making its way around the web about a professor catching 1/3 of his class cheating and the fallout from it (everyone retakes the midterm; people who admit to cheating have to take an ethics course). It reminded me of something one of my professors at Carnegie Mellon did in one of my freshman-year CS classes. This is a from-memory retelling; exact details are probably slightly off.

For context:

We get to class one day and wait for the professor to start speaking. Today he just starts running through slides silently:

  1. “As many of you know, one of my hobbies is catching cheaters”
  2. A picture of the last homework with the phrase “The Glorblar Problem” circled
  3. A screenshot of Google with “The Glorblar Problem” in the search box
  4. A screenshot of the Google results, with the domain of the first result (which had the problem and correct solution) circled
  5. The whois result for that domain, with the professor’s name circled (you can hear a collective gulp as half the room realized what was coming next)
  6. The apache logs from the webserver, along with blurred-out reverse IP lookups for the entries

Then he started speaking. He told us he had access to the login records from the school’s public machines as well. Anyone who had Googled the answer should confess and take a 0 for the homework; otherwise he would report them to the dean and they’d fail his mandatory class.

A ton of students confessed. A lot of them had been sent the link by a friend, and many claimed not to have used it, but nobody could deny that they had been caught cheating by the letter of the law.

The moral of the story is don’t fuck with Luis von Ahn, he will wreck you (no, I wasn’t one of the students who cheated, but that was one of the more impressive hacks I’d seen during college).

tl;dr — My professor made a honeypot for cheaters by planting a Google bombed phrase in his homework.

How to Catch a Cheater was originally published in Stories by Brian Krausz on Medium, where people are continuing the conversation by highlighting and responding to this story.

November 22, 2010 07:47 PM

October 22, 2010

Brian Krausz

Force a Canvas Refresh

When playing around with some canvas stuff, I found an interesting bug in Chrome. Basically, when you have a putImageData call inside a setInterval loop, the canvas does not refresh properly.

There’s a pretty simple hack to fix this: force the canvas to refresh every time you call putImageData. But how do you do this? There is no redraw function, nor any obvious way to force a refresh.

The trick is making a change to some property of the canvas element. Doing this in a way that forces a refresh every frame but won’t actually change anything for the user is not as obvious as you’d think.

An opacity change with a sufficiently small delta isn’t actually visible, so opacity is a good candidate. We just switch back and forth between two slightly different opacities.

this.ctx.putImageData(img, x, y);
this.rebuild_chrome_hack = !this.rebuild_chrome_hack;
$('#canvas').css('opacity', this.rebuild_chrome_hack ? 1 : 0.999);

Force a Canvas Refresh was originally published in Stories by Brian Krausz on Medium, where people are continuing the conversation by highlighting and responding to this story.

October 22, 2010 03:45 AM

September 14, 2010

Brian Louie

Are you a web developer or designer?

(This is the same post as the one found on the Mozilla Hacks blog. Thanks to Jay Patel for co-writing this with me.)

If you are a web developer or designer, we can use your input. After gaining some great insights from our previous survey on Firefox 3.6 and Firebug 1.5, we have decided to go broader and get a better industry-wide snapshot of web developers.

We have created a new survey in our continued effort to better understand the web developer landscape and how the Mozilla Developer Network can be a better resource within it. Over the past few weeks, we have interviewed web developers about how they work and collaborated with a research consultant to compile a list of questions that will hopefully give us a clearer picture of the people behind the open Web and how we can better serve them.

This survey asks questions about the web development experience: the technologies and resources you use, the communities you join, and the companies that influence the realm of your work. We plan to use your responses to improve our developer engagement efforts and deliver relevant programs and content through the Mozilla Developer Network to make your web development experience better.

Your input is much appreciated. Take the survey here, and please feel free to share it with other web developers and designers by passing the link on to the appropriate lists, forums, blogs, etc.

It is important that the results accurately represent the diverse set of people that make up the web developers of the world, so we hope that you will help us reach other developer communities, including those mentioned in our survey. Together, we can take a step closer to a better open Web. Thanks!

– the MDN team

September 14, 2010 10:20 PM

September 08, 2010

Wei Zhou

Is the browser dying? Is user interface dying? Long live user experience.

I haven’t used my browser for almost a week…yes. I can’t believe it’s happening, and it’s happening so fast – people spend less and less time with their browsers now. As a user experience designer, I’d say this reflects a change in Internet lifestyle. It’s significant in computing history because it announces the death of the Matrix and, probably, the death of the user interface. It’s scary and exciting, because people will soon need to redefine user experience and the job function of UX designers: We will soon design things without thinking about screens. We will think about creating better services. We will become business designers. Or we will become entrepreneur consultants.

It happened almost overnight once Apple proudly introduced the iPhone and iPad, followed by a series of other fancy smartphones and mini computer-like devices. People are busy checking apps instead of the browser from the second they wake up in the morning until late at night. They are smart enough to realize that what they really need is to get rid of all the user interfaces and get straight to their best friends’ statuses, bank balances, and next job opportunities. Now they can accomplish these tasks without a computer. Sooner or later some other virtual device or service will help them get rid of their smartphones too, such as built-in body chips or visually augmented sunglasses.

People’s lives are simplified. The world will be more visual and more real, since we won’t be dealing with interfaces but with down-to-earth, UI-free services.

Or is that what we really want? And what about system design?

As a hot nerdy girl, I both love and hate system design – it’s like a sophisticated but usually over-dressed woman: although we really just want to see the naked body inside, the process of getting there can be fascinating and addictive – it has some ritual value. If we don’t design it well, it turns from a hot blonde into your mother – who yells at you and makes you want to kill yourself.

So here I am saying: long live user experience. The browser may die someday, the smartphone will die someday, but there will always be people problems for us to solve – we UX designers will find ways to thrive and build better services. As long as there are people, there is user experience.

September 08, 2010 07:42 AM

August 30, 2010

Brian Louie

Introducing the new MDN website

(This is the same article that can be found on the Hacks blog and is an updated version of the draft that was posted here last week. Thanks to Jay Patel for co-writing this with me.)

This week, Mozilla unveiled the newly redesigned Mozilla Developer Network, the latest incarnation of MDC. The website has evolved over the years and we recently decided to change the name from Mozilla Developer Center to the Mozilla Developer Network (MDN) to better reflect the developer segments that make up our community and provide a better platform for engaging developers in the Mozilla mission and our plans for pushing the open Web forward. In this blog post, we’ll walk you through some of the new features and content, in addition to some of the things you can expect in the months to come.

Our Mission
Upon first glance, the most obvious change is that the website has undergone a radical overhaul: from top to bottom, the entire MDN looks different. Even the tagline underneath the title is new: the Mozilla Developer Network is “a comprehensive, usable, and accurate resource for everyone developing for the Open Web.”

The idea behind the tagline is perhaps the biggest change we’ve made to the MDN: we wanted to create a place where all web developers – not just people who develop using Mozilla technologies – can find the resources they need to make the Internet at-large a better place. This fundamental premise drove many of the design decisions incorporated into the new MDN.

The New Home Page
One of our main goals for this redesign was to streamline and simplify the process of finding information. Because the MDN home page is the first page most people will see when they visit the MDN, we wanted to make sure a diverse set of users could access as much information as possible without cluttering the page.

Throughout the page, there are several opportunities for the user to sign up for an MDN account; if you already have an MDC/Deki account there’s no need to sign up again. If you’re new to MDN, you can easily register to join our community. Members will be able to log in to both the Developer Forums and the Docs Wiki.

The page also features a promo box with revolving panes highlighting important open Web technologies and tools. The goal of the promo box is to direct developers to the pages about technologies that are most likely to be pertinent to their work, which helps reduce the number of steps it takes to reach a desired article. There are a lot of things happening at Mozilla that developers will care about, so this is where we hope to provide every visitor a chance to learn more about those topics.

Moreover, the main content of the page will be available through a tabbed interface that will allow users to click through to whichever section in which they are interested. We currently have Docs and News but have plans to add Tools and Events as well. There are plans for the sidebar as well, but for now we have provided a live Twitter feed so that users can get a feel for what various Mozilla communities are talking about and sharing. We will eventually add trending topics based on activity from around the MDN, streams for conversations in the forums, and the latest web technology demos and experiments.

Site Architecture
The new MDN header contains a search bar and click-through access to various sections through the primary navigation. The MDN’s content has been separated into four main categories, each of which has its own navigation item: Web, Mobile, Add-ons, and Applications. This segmentation of the navigation allows us to organize content into non-overlapping buckets, which should more precisely direct developers to the content they need. Then we have the universal Docs – which takes users to a landing page leading to various articles and content from the Docs Wiki – and Community – which takes users to the Developer Forums for now – navigation items. Since those two areas are applicable to all developer segments, we kept them separate in the navigation layout.

Each content page has a similar format: we feature a few popular articles for each category, some Mozilla-supplied tools, related news and updates, and a feed with relevant tweets. At the bottom of each page, there will eventually be popular forum topics and community comments. (Only the Add-ons page has this content right now). If you don’t find what you’re looking for on the landing page for any given category, there are links in each section of the page that take you to more options and pages. Despite the design overhaul, all of the information from the previous MDC remains intact, so there’s no need to worry about losing important articles. It’s all there and works exactly the same way as before.

Also, any information that can be accessed via the Docs landing page can also be accessed from other pages on the MDN, but we wanted to provide an alternative way of presenting the information: we highlight some important web development topics, in addition to important topics from the other categories. There are also fun features like Doc of the Day and Most Active Docs, in case you’re interested in what everyone else is looking at.

Growing the Community
The final navigation item in the header is for Community and perhaps the most important addition to the new MDN: Developer Forums. In the previous version of the MDN, although there was plenty of documentation to be found, we didn’t provide developers much of an opportunity to talk, discuss, and ask questions. We felt that, in our goal to make the MDN a central hub for web developers, forums comprised an important feature to incorporate into the new version.

Right now, there are five broad topics for discussion: Open Web, Mozilla Platform, MDN Community, Mozilla Add-ons, and Mozilla Labs. These domains should be able to cover much of the wide gamut of available discussion topics, but if not, we can always add new ones. Because the forums are new, they are still in the experimental stage; if you have any feedback for us, just use the feedback link at the bottom of the page. Feel free to start new threads and ask questions about anything, especially if it’s about documentation or the open web in general.

We’d love for you to try out the new forums! Again, if you have an account for MDC/Deki, you can use that to log in; if not, you can use the link in the upper-right corner to become an MDN member.

Submitting Feedback
At the bottom of every page, there’s a link to submit feedback on the new MDN pages. Whether there’s something wrong or there’s something you’d like to see (or whether you’d just like to say hello!), just hit that link and let us know what’s on your mind. We’ll do what we can to integrate your ideas to make the Mozilla Developer Network a better place for all developers.

Next Steps
Though we have made quite a few changes to the Mozilla Developer Network, they certainly are not the last. As the MDN continues to expand, we have decided to create a next-generation Docs platform that the Mozilla web development team will be building on Django, similar to the one being implemented for the new SUMO site. Planning is already underway, and we hope to migrate documentation over sometime in 2011.

Once we’ve converted all the content over, we plan to improve the way you find information via the search bar. So far, we have been devising ways to improve the tagging system and make sure that localized versions of articles are released as soon as possible. In addition, with the help of article rating and commenting systems, we can help make sure that the most relevant and accurate results are mentioned at the top of each search query. And finally, we are building a system that allows community experts in particular fields to regulate editing and writing of articles in their domains.

We’re also looking to expand the Community tab. Though we expect the forums to remain the centerpiece of that section of the site, we hope to also bring you news, updates, and other community-sourced content.

We hope that this has helped you get acquainted with the new Mozilla Developer Network. As always, we are amenable to your feedback and ideas, as we are as eager as you are to make the MDN an even better place for web developers to write, read, and discuss important Web topics. We look forward to hearing from you, and we hope you like the new MDN.

– Jay & Brian (+ the MDN team)

August 30, 2010 08:01 PM

August 24, 2010

Brian Louie

Some cool releases this week

Today, we released the next beta for Firefox 4 – the fourth one, for what it’s worth. This might be the biggest beta release yet. Most notably, it features Firefox Panorama (previously referred to as Tab Candy) and Firefox Sync, which allows you to sync your desktop browser with things like your phone and other computers in your house or office or wherever else. CNET posted a good, unbiased summary of the update earlier today. If you’re interested at all in the future of tab and task management, definitely check out the latest beta.

Also of note: the refreshed Mozilla Developer Network, as I’ve been discussing in many of my posts below, releases tonight, meaning that when you go to tomorrow morning, you should be able to see the hard work Jay, the web development team, and I have been putting into the MDN over the past few weeks (or months). Expect a long post detailing that in the near future!

August 24, 2010 10:00 PM

August 12, 2010

Brian Louie

Firefox 4.0 beta 3 live

Hey, everyone!

Earlier today, the third beta for Firefox 4.0 went live. The most prominent changes include support for multitouch gestures on Windows 7 machines and revisions to the JS engine so that pages render faster. If you haven’t gotten it already, find it here.

I’ll be publishing another post soon detailing my work from the past couple of weeks. Stick around!

August 12, 2010 02:23 AM

July 29, 2010

Brian Krausz

GazeHawk Launches!

At least now I have a publicly known excuse for being busy!

Y Combinator Backed GazeHawk Heatmaps With Web Cams

Keep an eye on our blog at for some interesting technology/startup posts.

Time to go deal with my ever-growing inbox ;-).

GazeHawk Launches! was originally published in Stories by Brian Krausz on Medium, where people are continuing the conversation by highlighting and responding to this story.

July 29, 2010 10:06 PM

Brian Louie

Redesigning the MDN (part 2)

[This is a continuation of the post “Redesigning the MDN (part 1).”]

In my previous post I mentioned that our fundamental goal is to provide web developers with a central hub for documentation and discussion. I’ve talked a bit about how we plan to achieve the former, but what of the latter?

For this reason, we added yet another tab to the MDN’s header, called “Community.” In the newest iteration of the MDN, this tab will primarily serve to host a UserVoice-based forum where web developers can congregate and discuss anything open-web related. As we get that finalized, I’ll update this blog with a list of initial topics so that you can start to think about things to talk about.

Again, when the new MDN goes live sometime in mid-August, expect a more in-depth tour of the Community tab. Also, note that the Community tab will eventually encompass more than forums. Eventually there will also be community-provided news articles and other community engagement efforts via Mozilla. Those should be coming in later iterations of the MDN.

Thanks for reading! As usual, if you have any questions, comment or shoot me an email.

July 29, 2010 09:44 PM

July 28, 2010

Brian Louie

Redesigning the MDN (part 1)

(Redesigning the MDN is a complex project that’s going to take more than one post. In this post, I discuss the reasoning behind some of the more superficial changes I’ve made to the Network.)

Another one of my big projects here at Mozilla has been working on redesigning and re-branding the Mozilla Developer Network (MDN). If you’ve been keeping up with my blog, you’ve already seen some of the work that Jay and I have done with regard to graphically redesigning the network. As I’ll discuss in a bit, we’ve made some important changes since then.

First, it’s important to understand the impetus for change. The Mozilla Developer Center (the current name for the MDN), as it stands, can be found here, if you’re not already familiar with it. Although it certainly looks better than it has in years past, it could still use some fixes here and there. Or all over the place. You can find the slide deck with some pictures of our first draft of the redesign here.

Underneath the superficial overhaul, however, lies a deeper paradigm shift. Although content is pretty well spread out to cover various domains of the open web, the focus currently concentrates on developing on the Mozilla platform specifically. While certainly we want to reward and help those who develop with the tools that we provide them, we feel that perhaps this mindset is a little close-minded. Mozilla’s goal is to advance the open web in every way possible, Mozilla-inspired or not. Consequently a solely inward focus on only Mozilla’s tools is unintuitive and does not allow us to optimize the progress of the open web.

Ultimately, the goal of the Mozilla Developer Network is to provide a central hub for discussion and documentation for open web developers, regardless of platform. The redesign of the MDN cannot lose sight of this goal.

With this goal in mind, we took the original .psd files from our web designers and started making tweaks. As previously mentioned, there are four main documentation headers: Web, Mobile, Add-ons, and Applications. In the currently released design, all of those headers are given equal weight, which runs against our fundamental goal.

Unfortunately, there wasn’t much of a way to change these headlines without breaking the entire header, so we instead decided to revamp the site’s home page to place a greater emphasis on open web technologies at-large. When the new site is live, you’ll be able to see for yourself. There have been several other changes as well; I’ll tour through them on this blog when the new Network goes live.

One thing I’d like to emphasize: though our presentation of information has changed to fit our goal, none of the documentation has changed. You will still find all the information found on the current MDC, from Gecko to info about Mozilla-specific APIs. These articles will continue to be updated as well. Only the presentation of the information has changed.

Expect the site to go live mid-August. Tomorrow, I’ll write about some of the awesome new features that we can expect to see in the next iteration of the MDN and how we plan to facilitate communication between developers.

July 28, 2010 06:19 PM

July 27, 2010

Brian Louie

Firefox 4 beta 2

A quick aside from my summary posts: download the Firefox 4 beta 2 to check out application tabs and a sweet new Mac OS X interface. The alpha for Tab Candy is also available as an early build of Firefox, if you’re interested in trying that out as well.

For a more detailed rundown of the new beta, check out the Hacks blog for a sweet demo from our very own Paul Rouget!

Happy developing!

July 27, 2010 08:12 PM

A global picture of the open web

Hey there! Sorry for the long delay since my previous post; it’s been a busy past few weeks. Since coming back from the summit, my internship has been full of twists and turns, leaving me little time to write.

I’ve been assigned to work on a whole lot of different things. To give equal exposure to all of them, I’ll write several posts to update you on my work these past few weeks.

One of my big projects thus far has been designing an industry-wide web developer survey. Some of you might have seen one of my previous blog entries about the results of the survey we distributed this past March. The findings, while interesting, don’t paint a complete picture of the state of the open web and the developer tools you all like to use. Thus far the surveys we’ve released – the one previously mentioned and one that was released last November – have been distributed via Mozilla channels and have only really inquired about Mozilla platform technologies. We hope to change that.

We hope to get a wider snapshot of the web development community, not just for us, but for you. A panorama, if you will.

The survey will be released in a few weeks – sometime before the end of August – so check back frequently for the opportunity to take the survey and help us paint the picture of the open web. Thus far I’ve iterated through several drafts of the survey and talked to several market research consultants to determine how to best distribute and design the survey. It will be ready soon!

On a more interesting note, I’ve also been reaching out to infographic designers. Although the data obtained from the survey will certainly help us, we want it to help you, too. We want to present our data and conclusions in an aesthetically awesome way. Right now, I’m talking over our goals and the possibilities of an end design with several artists. They all have some pretty good stuff going on; they’ve won awards and made some pretty interesting designs. Stay tuned!

Because we expect such a large audience to be taking the survey, there has been a lot of pressure to get every word and every question right. I never realized just how much goes into designing a questionnaire: figuring out the average time spent to read and answer each question, the best way to structure a question, the most efficient reconciliation of details and the big picture, etc. Given the impact we expect this survey to have on our engagement efforts, the survey must determine with exact precision the information that we’re looking for. Such a statistical conquest, I have discovered, isn’t as easy as it looks.

More blog posts about the other things I’m working on will be coming soon! Thanks for reading.

July 27, 2010 05:54 PM

July 21, 2010

Brian Krausz

The Worst Paragraph Ever Written

Context: There’s an organization that sponsors Shabbat dinners. It’s really awesome: they basically pay you to feed your friends. That being said, they need a copywriter. Here’s a paragraph in the email they sent confirming my sponsored dinner:

As of Monday, July 19th, the new NEXT Shabbat program will begin providing new NEXT Shabbat’s with up to $14 per guest (maximum of 16 guests) for the first three meals a host registers after that date. After those three meals, hosts will receive up to $10 dollars per guest. As a returning NEXT Shabbat host, your meal-payment will be based on the number of meals you’ve held already. However, since you have a meal scheduled to take place between July 19th and August 19th, you will still receive $18 per-guest for that meal. Following that meal, if you’ve already held at least three meals, any meal registered after July 19th will only be eligible for a payment of $10 per guest. By joining with us as we make these changes to the program you will give many more people the opportunity to host and will enable thousands to participate in home-based Shabbat meals for the first time. If you’ve held less than three meals, any meal registered after July 19th will be eligible for a payment of up to $14 through your third meal. You can also see how many meals you have already hosted.

As always apologies for the lack of updates: I promise there’s a short (but major) update coming very soon, followed by more frequent posting (there’s a reason I’ve been so quiet lately).

The Worst Paragraph Ever Written was originally published in Stories by Brian Krausz on Medium, where people are continuing the conversation by highlighting and responding to this story.

July 21, 2010 11:31 AM

July 11, 2010

Brian Louie

Mozilla Summit 2010: day 3

The final day was a little shorter than the first two, but it was arguably the most memorable of the trip.

We woke up to a few lightning talks, which are five-minute demonstrations of various things that Mozillians have been working on. The lightning talks also featured the work of fellow intern John Wayne Hill, who has been working in user experience. Check out his blog at for more info.

Afterward, we went to our penultimate round of breakout sessions. Today I decided to stray from the usual engagement stuff to check out a session on web gaming. We were treated to demos of non-Flash-based games, including one that a fellow Mozillian had coded up on the plane on the way to Vancouver. It was, in short, a good time.

We then had one final round of lightning talks – featuring another fellow intern, Kyle Huey – followed by two more sets of breakout sessions. One of the most interesting breakout sessions featured my mentor, Jay Patel. As I mentioned in a blog post a week or two ago, one of our big projects has been working on the expansion and the redesign of the Mozilla Developer Network (previously called the Mozilla Developer Center). We’ve finished designing the new static pages, which should be going live by the end of the month. Most important in the redesign of these static pages is the categorization of documentation pages into four parts: web, mobile, add-ons, other.

If you want to see what the MDN looks like right now, check out the site. As you can tell, it could use quite the graphical overhaul.

We also hope to clean up the documentation and improve the way people navigate the site. As indicated by the heated discussion at the session, the most difficult obstacle will be localization: it’s hard to coordinate translation of documentation and to keep all of those different translations updated. We plan to keep the site constantly updated with the most updated version of an article in at least one language and to offer incentives for localizers to contribute. Furthermore, we plan to pinpoint contributors’ areas of expertise to streamline the technical review process so that articles are updated in a timely fashion.

If you want more details about the redesign of the MDN, check out the site for a copy of the PowerPoint deck that Jay and I made and also the priorities and requirements documents. If you have anything you’d like to contribute or suggest, please feel free to comment on this article or to contact me at Any feedback is much appreciated!

Equally important is finding ways to get people to contribute original documentation. Eric Shepherd and Janet Swisher gave a great presentation about the implementation of so-called “doc sprints,” which will bring together developers and experts in a particular field to draft documentation about a particular topic. If you have ideas, definitely email Eric or Janet at sheppy@mozilla and jswisher@mozilla respectively.

After quite the busy day and an inspiring closing address by Jay Sullivan, we were corralled into gondolas that took us up to the peak of one of the highest mountains in Whistler. There was a great dinner and dance party up there; it was the perfect way to wrap up this opportunity to learn so much awesome stuff and meet so many awesome people. As they said at the closing address, there was “too much awesome.”

I can’t find a better way to describe it. I hope you’ve enjoyed reading my chronicles of the summit. If you have anything to say, don’t hesitate to talk to me!

July 11, 2010 04:05 AM

July 10, 2010

Brian Louie

Mozilla Summit 2010: day 2

Day two was equally as eventful, interesting, and informative as day one.

After starting the day with a breakfast highlighted by muffins and yogurt with berries and honey, we were treated to a couple of keynote addresses from Mitchell Baker and Mike Shaver, two of the most prominent Mozillians. They were inspiring; I’ve never felt safer and prouder of Mozilla’s future.

Soon after, breakout sessions continued. The first breakout session I attended was pretty unique: it compared Mozilla to other nonprofit movements, from the Boy Scouts to Alcoholics Anonymous. Clearly Mozilla is different from these movements – the difference is especially apparent with certain of these organizations – but there are nonetheless certain core values that make each of these decentralized organizations successful, from having a unified mission to delineating a specific list of principles. Because precedents have been established, it’s easy to follow these examples and cultivate an effective call to action.

A few hours (and a delicious lunch with ravioli and peach upside-down cake) later, Mozilla threw a “science fair” in which Mozillians from various departments showed off demos of their hard work. The exhibits were diverse; there were some WebGL demos, some mobile demos with Nokia N900s, and there were Bugzilla-related demos. My favorite was Paul Rouget’s usage of HTML5 in Firefox 4 to do crazy things in his web page, like play videos on rotating cubes and scroll along 3D photo walls, all using just HTML, CSS, and a bit of JavaScript. The hall was clogged around Paul’s table; be sure to take a look at it online after the summit is over!

The next breakout session I attended discussed the telling of the Mozilla mission. The session featured consultants from Engine Company 1, a market research firm. With the help of some data they’ve been collecting over the past couple years, the consultants offered some interesting and rather surprising insights, from the “people” Mozilla and its competitors convey to the branding association between parent company and browser to even the way people look at the Mozilla mascot. The main takeaway is that although Mozilla has awesome products (try out the Firefox 4 beta if you haven’t!), its brand image is not as well defined as its offerings. I accepted this as a personal challenge with regard to the developer community. I’ll update my blog as I continue to make progress over the next couple of months.

Afterward, we were treated to the summit’s World Expo, in which Mozilla branches from countries all around the world set up tables and talked about their countries. I’ll be honest: I really only went to the tables that had free snacks, including some rather interesting salted licorice. But the people were equally interesting! I met programmers from Sweden, marketers from Japan, and community developers from Chile. In short, Mozilla is EVERYWHERE.

The highlight of the event was definitely the dinner. In honor of the international theme of the event, Mozilla set up three dining rooms, each geographically themed. One room was for Asia, one was for Europe, and one was for the Americas. There was so much food that I got full after eating in just one room! But the food was delicious. Despite being full, I couldn’t help but stuff myself a little bit more.

The expo more or less marked the rest of the day, minus some barhopping and such. Stick around for updates on day three!

July 10, 2010 01:20 AM

July 08, 2010

Wilson Lee

Cross-Domain Requests and Prototype

Ever since 3.5, cross-domain HTTP requests have been supported in Firefox. Getting this to work may seem easier and saner than piggy-backing Flash ("simply request a URL of different origin in your XMLHttpRequest!"), but unless you use jQuery, you'll realize that for some reason, cross-domain requests mysteriously fail in your JavaScript framework despite the fact that the server is correctly responding with an appropriate Access-Control-Allow-Origin header.

read more

July 08, 2010 06:47 PM

June 03, 2010

Brian Krausz

How to Dispose of a Hard Drive

In 5 easy steps:

  1. Find hammer
  2. Give up on finding hammer…grab pliers
  3. Attack drive with pliers, destroying circuitry but leaving platters untouched
  4. Cut finger, swear and bleed profusely on drive
  5. Repeat steps 3 & 4 until you realize that nobody cares enough about your data to read it directly from platters. Toss drive in trash, go grab a band-aid


How to Dispose of a Hard Drive was originally published in Stories by Brian Krausz on Medium, where people are continuing the conversation by highlighting and responding to this story.

June 03, 2010 01:50 AM

May 27, 2010

Brian Krausz

Diaspora’s Upfront Costs

This post on Hacker News got me thinking about the costs Diaspora’s going to have receiving all of their money and fulfilling their promises, so I did a little digging. The numbers below are estimates, but they should be fairly close to actual costs for these services (assuming no huge burst in donations over the next 6 days):

$ 5510 = $190k * 2.9% Amazon fees[1]
1800 = 6k donations * $0.30 Amazon fees
9500 = $190k * 5% for Kickstarter
4150 = 5000 cds with jewel cases * $0.83 (random googling, I assume the "note from our team" will be on the jewel case insert)
1380 = 400 sheets of stickers * $3.45 (zazzle, 20 sheet of stickers, 2 stickers per person)
10200 = 3000 shirts * $3.40 (customink, Gildan 50/50 1 color on white front only)
9000 = 3000 postage & packaging for shirts + stickers * $3
2000 = 2000 postage & packaging for just stickers * $1
1200 = "Turnkey hosting" for 600 people[2]
2800 = 4 new computers*$700

[1] I’m not sure if Kickstarter gets a bulk discount here, or if it goes by Diaspora’s account history, so a volume discount may apply. I also assume all transactions are billed at the $10+ rate (2.9%) rather than the rate for smaller transactions.

[2] Turnkey hosting and phone support are hard to estimate…it may be more if they need to pay for phone support, but I think this is a reasonable number. I’m also including project hosting costs. I know they say plenty of hosting companies have offered their services, but you can’t guarantee that will be there.

Keep in mind that this misses three big points:

  1. Declined credit cards & canceled/fraudulent donations — I have no clue what the expected amount of these is
  2. People who requested a gift for donating who never give their contact info
  3. The huge overhead for packing/sending all of these items, though I imagine a day of pizza and soda for volunteers can get all of them packed (the con I used to help run had similar “mailing parties” that were fairly effective)

With that in mind you’re looking at about 25% of the funding going toward transaction fees and fulfilling the rewards. This leaves around $142,000 for them, which is still plenty of money. Though I don’t want to think about what an accountant would say about this money and how much he would charge to make it legitimate.
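For anyone following along, the arithmetic behind that 25% figure (just summing the estimates above):

```python
# Sanity-check the estimates above: total overhead, and what's left of $190k.
costs = {
    "Amazon percentage fees": 5510,
    "Amazon per-donation fees": 1800,
    "Kickstarter 5%": 9500,
    "CDs with jewel cases": 4150,
    "Sticker sheets": 1380,
    "T-shirts": 10200,
    "Shipping shirts + stickers": 9000,
    "Shipping stickers only": 2000,
    "Turnkey hosting": 1200,
    "New computers": 2800,
}
raised = 190_000
overhead = sum(costs.values())
print(overhead)                        # 47540
print(round(overhead / raised * 100))  # 25 (percent of funding)
print(raised - overhead)               # 142460, i.e. roughly $142,000 left
```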

Don’t get me wrong: I think Diaspora is a great idea and I’d love to see it succeed, but I wonder if the founders considered the logistical overhead to this whole thing. Granted, this would have been smaller had they only raised $10k, but it’s still a big task to undertake. I hope they’re ready for it and they don’t let us down.

Diaspora’s Upfront Costs was originally published in Stories by Brian Krausz on Medium, where people are continuing the conversation by highlighting and responding to this story.

May 27, 2010 03:08 AM

Boston WordPress Meetup Example Code

As promised, here’s the example code from the meetup I gave last Monday. It’s fairly small, so I only describe what it does briefly in the top comment. Feel free to use it how you wish (though if you make a lot of money off of it you owe me a t-shirt):

Boston WordPress Meetup Example Code was originally published in Stories by Brian Krausz on Medium, where people are continuing the conversation by highlighting and responding to this story.

May 27, 2010 02:01 AM

May 04, 2010

Brian Krausz

On To Bigger Things

Alternative title: OMGWTF I just quit my job!
Alternative alternative title: A better way to resign from a company

That’s right. I just gave notice that I’m leaving a great job with awesome coworkers and interesting problems. I’m trading all of that for the privilege of moving across the country, not taking a salary, and working 80 hours a week. If I’m lucky, I’ll get to continue doing this for a long time, rather than finding another job that pays actual money.

Shorter explanation: I’m moving to Silicon Valley to start a business.

Not too much to say right now about the startup. I got a funding offer I couldn’t turn down for an idea I really believe in. I don’t want to say too much about the idea itself yet, since we’re still fleshing a lot of things out, but expect more posts in the future; the valley always inspires more blogging in me.

However, there is something to say about my current employer, TripAdvisor. Shortly put, they’ve been amazing. I got to work on awesome features (like mobile flight search, which just launched today… from a mobile phone). And not just work on them but have a major say in their direction. My coworkers were a lot of fun, and my boss was flexible and understanding.

I knew before I graduated that I would eventually leave my first employer to start a business (though I didn’t think it would be so soon). TripAdvisor made that an incredibly hard thing to do, which I commend them for. Hopefully I’ll be hiring soon, but until I open a Boston office, definitely apply for a position with them. If you’d like a contact within the company (not necessary, the engineering HR manager is a solid guy and will get your resume to the right place), just ask.

I’ll be out in Mountain View in early June. See you on TechCrunch :).

On To Bigger Things was originally published in Stories by Brian Krausz on Medium, where people are continuing the conversation by highlighting and responding to this story.

May 04, 2010 01:20 AM

April 27, 2010

Wei Zhou

Tencent QQ UX design – Chinese user experience products is picking up…are we ready for the combat?

This article is a reflection on the user experience design of a popular Chinese chat client, QQ. I think UX experts can learn about international digital product development and cultural barriers by studying this case. QQ has proven to be the best chat software among all existing ones (it has the richest user experience and the most users), and its business model deserves to be studied by Western companies. China has a unique way of thinking about user experience design in terms of life-enhancing patterns, and the country is moving from imitation to innovation. For me this process is thrilling. Western investors and entrepreneurs, is it too late to enter China?

– Wei Zhou

I’ve recently seen an article from Tencent QQ official site:


Background: Tencent QQ is the dominant instant messaging software in China. I personally believe it will be a major reason Facebook and Twitter fail in China. The software has existed for over a decade and its influence is all-encompassing. Three generations of Chinese users use it, and together with Baidu (a Chinese Google), Taobao (a Chinese eBay), and Xiaonei/Kaixin (a Chinese Facebook) it forms the pattern of Chinese digital UX. Recently Tencent officially announced their new prototype. Its claim of practicing “natural interface design” reflects some Zen thinking, and its fresh, original interface is worth seeing.

Development process (translated from their official site):

This prototype is Tencent QQ’s first NUI (Natural User Interface) product. In addition to basic instant messaging, it also includes dynamic profile pictures, dynamic backgrounds, multi-tab windows, 3D interaction, vector interaction, and desktop friend shortcuts…

The original interface delights users: it triggers far more emotion. The interface is life-enhancing; it delivers an imaginary space beyond the dimensions of time and space. The expression of the interface is bold and colorful… It supports better integration of mouse and touchscreen interaction.

Two trends drove the design process: personification and materialization. The keywords are life force, time, and space. This means the product design draws on day-to-day life stories, the interaction model should have a “breathing” effect, the design changes with the time of day, and so on.

The post’s screenshots (not reproduced here) cover the brainstorming process, wireframes and prototyping, an interesting pane emotion selector, visual languages, the profile manager, friends management, dynamic backgrounds, and a pie panel that enhances touchscreen interaction.

I will add QQ’s business model tomorrow. They did a great job of integrating online ads into the product smoothly and seamlessly.

April 27, 2010 09:27 PM

April 22, 2010

Ehren Metcalfe

RTL level function removal

Over the past few days I’ve been focusing on getting the call graph portion of my dead code analysis in check. It turns out that function local const (or static) initializations are not accessible during later gcc passes. Luckily, walking the front end tree representation, which is accessible via Treehydra‘s process_cp_pre_genericize, does the trick. This takes care of all remaining false positives of which I am aware.

The downside is that going through all these extra tree codes is sloooow. After a bunch of false starts the damn thing is still running on one of the CDOT‘s development machines (probably an 8+ hour compile).

For a little while though, I’ve been thinking of ways to automatically remove identified dead code without actually patching the source. By applying Taras’ assembler name patch to dehydra I can now identify precisely which code to remove. The question now is how to remove it.

My first thought was a hack using objcopy. I could first output a list of the mangled names of dead functions and then, after running a build with -ffunction-sections, run a script like this:

while read asmname; do
  find objdir -name "*.o" | xargs objcopy --remove-section=".text.$asmname"
done < asmlist.txt

(and then relinking). This works, but only for non-member functions.

The other option was some sort of gcc hack to remove as much code as possible when objcopy can’t do the job. I first tried removing every statement in every basic block of the function (see gsi_remove). This seems to only work with a non-branching cfg, however (even when I leave a valid return statement). I then tried cgraph_remove_node with an IPA pass plugin, which blows up if a function’s referenced anywhere else.

Today I arrived at a solution that, although requiring a direct patch of GCC, seems to be ideal. Surprisingly, it’s possible to hook in right before assembly generation, and it’s easy too:

diff --git a/gcc/final.c b/gcc/final.c
--- a/gcc/final.c
+++ b/gcc/final.c
@@ -4090,16 +4090,34 @@
   if (symbol_queue)
     {
       free (symbol_queue);
       symbol_queue = NULL;
       symbol_queue_size = 0;
     }
 }
 
+static bool
+is_dead(const char* name)
+{
+  char asmname[100];
+  FILE* fp = fopen("/home/ehren/asmlist.txt", "r");
+  if (!fp)
+    return false;
+  while (fscanf(fp, "%s", asmname) != EOF) {
+    if (strcmp(asmname, name) == 0) {
+      fclose(fp);
+      return true;
+    }
+  }
+  fclose(fp);
+  return false;
+}
+
 /* Turn the RTL into assembly.  */
 
 static unsigned int
 rest_of_handle_final (void)
 {
   rtx x;
   const char *fnname;
@@ -4109,17 +4127,19 @@
   x = DECL_RTL (current_function_decl);
   gcc_assert (MEM_P (x));
   x = XEXP (x, 0);
   gcc_assert (GET_CODE (x) == SYMBOL_REF);
   fnname = XSTR (x, 0);
 
   assemble_start_function (current_function_decl, fnname);
   final_start_function (get_insns (), asm_out_file, optimize);
-  final (get_insns (), asm_out_file, optimize);
+  if (!is_dead(fnname)) {
+    final (get_insns (), asm_out_file, optimize);
+  }
   final_end_function ();
 
   /* ??? The IA-64 ".handlerdata" directive must be issued before
      the ".endp" directive that closes the procedure descriptor.  */
   output_function_exception_table (fnname);
With this, functions are removed completely (except from the symbol table) and the bodies of virtuals are replaced with a couple words worth of NOPs. Yes, opening up the file with a hardcoded path for every function is ugly but later on I could always read it into a global somewhere else (and do a binary search).

The only downside here is I won’t get any link time errors if a few false positives slip through, as opposed to with objcopy. In my experiments, calling a function that doesn’t exist results in an immediate segmentation fault (makes sense), but storing the return of a NOP-body virtual just leaves you with an uninitialized value.

Hopefully, I’ll soon have some good results on the analysis front to actually test this on Mozilla.

April 22, 2010 04:38 AM

April 08, 2010

Ehren Metcalfe

Dead code progress

So far things are on track in my attempt to develop an unused-function-finding tool. Now that the function pointer/jump table problem has been solved, other more subtle issues have come to light.

The first was a problem with callgraph‘s handling of inheritance chains. As I mentioned previously, it was necessary to add each method to the node table (see schema reference) both when the method’s body is processed (as is already the case) and when the method’s type is processed. At some point I should really develop some tests here, but this affects the recognition of pure virtual functions in a number of complicated cases.

However, I also ran into another issue where certain methods were not finding their way into the inheritance chains to which they belong. This seems to happen only when a virtual function overrides a base class function that has not been defined in the ‘next up’ base class (A defines virtual foo, B derives from A, C derives from B and redefines virtual foo). This could be a Treehydra issue, or maybe a problem with the GCC binfo data structure (or maybe I’m just misunderstanding things).
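The hierarchy in question can be sketched in a few lines (a Python stand-in for the C++ virtuals, just to illustrate the shape of the problem):

```python
# The problematic shape: C overrides a virtual that its immediate base
# class B never redefines, so the override "skips a level" in the chain.
class A:
    def foo(self):
        return "A.foo"

class B(A):          # B does not define foo at all
    pass

class C(B):
    def foo(self):   # ...yet C still overrides A.foo
        return "C.foo"

# For dead-code purposes, a call through an A (or B) reference can land
# in C.foo, so a call to A.foo must keep C.foo alive.
print(C().foo())   # "C.foo"
print(B().foo())   # "A.foo" (inherited; B adds nothing)
```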

Either way, my solution has been to process all base classes and subclasses of a method both when the type is processed and also when the method body is processed. This appears to be a working solution (although it certainly does not improve callgraph compilation times).

Once this was handled I started to get some pretty good results, but I noticed scriptable methods were getting into the mix. After a few hours of fruitless hacking it turned out I had just forgotten to properly define NS_SCRIPTABLE (since I’m not running a --with-static-checking build). After rebuilding again, I believe I finally attained a 0% false positive rate.

This time I hit another problem though: a bunch of genuinely dead methods turned up by my most recent (defective) analysis were not showing up. In fact, very few methods were showing up at all. Investigating, it turns out I’ve been quite overzealous in marking methods as scriptable. My previous technique was to check whether __attribute__((user("NS_script"))) was present in the declaration attributes of a function, and also whether it was present in the type attributes of the class and any base class. This excludes a bunch of juicy dead stuff like nsCharsetMenu::SetCharsetCheckmark (gimple isn count 75 ftw), which is a member of a non-scriptable class that derives from two scriptable interfaces (neither of which declares SetCharsetCheckmark).

Naturally, the solution when marking methods scriptable because of their base classes is to mark the method scriptable only when the base class both declares the method and is scriptable. Come to think of it, I probably don’t even need to do this because of the way I group together base and derived methods.
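The corrected rule can be sketched as follows (a hypothetical Python model of the class data, not the actual Treehydra script; the layout mimics the nsCharsetMenu case above, with made-up field names):

```python
# Sketch of the corrected scriptability rule: a method inherits the
# "scriptable" mark from a base class only if that base class is itself
# scriptable AND actually declares the method.
def is_scriptable(method_name, cls):
    # Directly annotated (e.g. NS_SCRIPTABLE on the declaration).
    if method_name in cls.get("scriptable_methods", ()):
        return True
    for base in cls.get("bases", ()):
        if base.get("scriptable") and method_name in base.get("declares", ()):
            return True
    return False

# nsCharsetMenu derives from scriptable interfaces, but none of them
# declares SetCharsetCheckmark, so the method stays a dead-code candidate.
interface = {"scriptable": True, "declares": {"GetCurrentCharset"}}
ns_charset_menu = {"bases": [interface], "scriptable_methods": set()}
print(is_scriptable("SetCharsetCheckmark", ns_charset_menu))  # False
print(is_scriptable("GetCurrentCharset", ns_charset_menu))    # True
```

The old rule returned True for any method of a class with a scriptable base, which is what hid SetCharsetCheckmark.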

Anyway, my current status is waiting for a build with these changes to finish. We will see if there are more issues.

April 08, 2010 02:04 AM

April 06, 2010

Ehren Metcalfe

Function declaration escape analysis v2

I don’t want to get too excited until I’ve run this through 4,000,000 lines of code, but I believe I’ve solved the problem of being unable to process global initializations of const/static global variables. Earlier, I posted a message to the GCC mailing list describing my troubles with varpool. I did receive a helpful response: there is nothing inherent about const/static declarations that would prevent one from getting at the lhs of their initialization.

Today I experimented with walking as many trees in as many places as I could find, without much luck. I then tried compiling this sample code with -Wunused:

int foo() {
  return 0;
}

typedef struct {
  int (*p) ();
} Table;

static Table t[] = {
  { foo }
};

As expected, GCC warns about the unused static variable.

Taking a look at toplev.c, it didn’t take too long to find a solution (and this works for const, static, and const static):

Index: gcc/toplev.c
--- gcc/toplev.c  (revision 157978)
+++ gcc/toplev.c  (working copy)
@@ -844,12 +844,26 @@
   return output_something;
 }
 
+static tree
+find_funcs_callback(tree *tp, int *walk_subtrees, void *data)
+{
+  tree t = *tp;
+  if (TREE_CODE(t) == FUNCTION_DECL)
+    fprintf(stderr, "address held: %s\n", IDENTIFIER_POINTER(DECL_NAME(t)));
+  return NULL_TREE;
+}
+
 /* A subroutine of check_global_declarations.  Issue appropriate warnings
    for the global declaration DECL.  */
 
 void
 check_global_declaration_1 (tree decl)
 {
+  if (DECL_INITIAL(decl))
+    walk_tree(&DECL_INITIAL(decl), find_funcs_callback, NULL, NULL);
+
   /* Warn about any function declared static but not defined.  We don't
      warn about variables, because many programs have static variables
      that exist only to get some text into the object file.  */

I’ve got to fix up the output to match callgraph‘s serialization but this could be it.

April 06, 2010 07:46 AM

April 05, 2010

Ehren Metcalfe

Problems with const static initializations

Unfortunately I spoke too soon about developing an airtight dead code finder. The technique of processing file scope variables I mentioned in my previous post has a serious drawback: it doesn’t work for const static data. This is a show stopper when it comes to peeking into most jump tables.

I’ve been able to print type information for all globals using the dehydra hooks placed into c-common.c however it seems like const initializations are not even handled at this level. I have my suspicions that there’s no way to recover the FUNCTION_DECL node in this case, likely because gcc has no use for the info at this level.

Although I may be able to make do by simply manually filtering as many callback functions as I can, this approach is not quite ideal. I’ll have to think more about this, but I’m now thinking that the lto streamer might be of use. There’s also the chance that there’s another way of using the cgraph to get at this data.

The other possibility is ditching gcc entirely and using elsa to dump the data. I’ll report back when I know more.

April 05, 2010 05:02 AM

April 04, 2010

Ehren Metcalfe

Function declaration escape analysis

It’s been quite a while since I’ve blogged. Frankly, I’ve gotten behind with my work. However, I believe I’ve hit a huge breakthrough in finding dead code. I don’t want to get too ahead of myself, but if things work as I think, I can now identify every unused function/member function in mozilla-central with a near-zero false positive rate.

I’ve mentioned many times that assignments to function pointers are a huge pain when it comes to recognizing call graph edges. I’ve been able to handle function-local address taking for some time now, but the problem of functions referenced in global variables (usually jump tables) has proved elusive. In fact, it’s currently not possible to process global variables at all with Treehydra. This leads to thousands of false positives in the analysis and a bunch of special-case handling.

It suddenly dawned on me that GCC might have already done the work for me. ipa-type-escape.c, which “determines which types in the program contain only instances that are completely encapsulated by the compilation unit” seems to fit the bill, but I ended up with less than stellar results trying to print out any escaping function declarations. In fact, I don’t think it really is useful for my purposes.

However, the technique of processing global variables using the varpool is exactly what I needed. In fact, I can very easily write a GCC plugin to print off all the globally escaping declarations in a compilation unit. Unfortunately, getting a plugin to build that uses more than the standard set of routines is a bit of a challenge (more stuff needs to be linked in) so I just hacked it into dehydra_plugin.c. It works though!

Here’s the code:

static tree find_funcs_callback(tree *tp, int *walk_subtrees, void *data) {
  tree t = *tp;
  if (TREE_CODE(t) == FUNCTION_DECL) {
    // dump function (I use dehydra specific code)
  }
  return NULL_TREE;
}

static void find_funcs(tree decl) {
  walk_tree(&decl, find_funcs_callback, NULL, NULL);
}

// This needs to go in the execute function of an IPA pass.
// I just stuck it into dehydra's gcc_plugin_post_parse
struct varpool_node *vnode;
// iterate over every static/global variable and walk its initializer
FOR_EACH_STATIC_VARIABLE(vnode)
  find_funcs(DECL_INITIAL(vnode->decl));

Now all I have to do is mark every function printed by this routine as escaping. I’ve been able to match callgraph‘s serialization almost completely so this will be a breeze.

I’ve also found that the current way callgraph treats inheritance chains (using a table of ‘implementor’/‘interface’ pairs) is not particularly useful for finding dead code. In fact, a whole bunch of functions in the implementors table are being left out of the node table. I’ve been able to rectify this by treating method overriding just like any call edge. In particular, I have the base class ‘call’ the derived class, which fits right into my existing algorithm (because of dynamic dispatch, once the base method is called all bets are off on whether or not some derived method is called). Previously, just to get any results, I’d been identifying base method/derived method pairs by textually matching the prototypes. Looking at my current results, I’ve found a disproportionate number of static methods, which suggests that this technique is too conservative. Here’s the old script, btw. With these callgraph changes, the next one will be much simpler (with no method name parsing!)
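The override-as-call-edge idea reduces dead-code detection to plain reachability over the call graph. A toy sketch (hypothetical Python with made-up method names, not the actual callgraph schema):

```python
# Toy dead-code reachability: an override is modeled as an ordinary call
# edge from the base method to the derived method, so calling through a
# base pointer conservatively keeps every override alive.
calls = {
    "main": {"A::foo"},
    "A::foo": {"C::foo"},   # override edge: C::foo overrides A::foo
    "helper": set(),        # never called -> dead
}

def reachable(roots, edges):
    seen, stack = set(), list(roots)
    while stack:
        fn = stack.pop()
        if fn in seen:
            continue
        seen.add(fn)
        stack.extend(edges.get(fn, ()))
    return seen

live = reachable({"main"}, calls)
dead = set(calls) - live
print(sorted(live))   # ['A::foo', 'C::foo', 'main']
print(sorted(dead))   # ['helper']
```

Everything not reached from the roots is a dead-code candidate; the override edges are what keep C::foo out of that set.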

I’ve had some success already with dead code in content, and particularly in layout, even with this rudimentary script. I also certainly have more to file. With this new approach, though, I think I’ll be able to find and file all of it by release 1.0.

April 04, 2010 03:13 PM

March 05, 2010

Wei Zhou

It’s really not all about money

When I studied Graphic Design at Yale University, my professor told me a universal truth: “When you choose a career path, the primary thing is to figure out whether you would enjoy the details.” I realized at a young age that I don’t enjoy choosing typefaces among a thousand options, so I quit graphic design. Over the years, I’ve watched so many types of jobs: I would never be an investment banker because I don’t enjoy correcting typos or copy-pasting numbers into pitch books. I would never be a programmer because debugging kills my skin cells. And I would never be a teacher because I don’t enjoy talking to dumb kids. Now that I work as a senior consultant in the user experience industry, I realize I sometimes don’t like using Visio to create wireframes or Photoshop to create visual design comps. This baffled me for a while: how could I ever fulfill and satisfy myself? I think this article (by Josh Kaufman) answers the question well: you need to know yourself well enough.

Here’s a deceptively simple question: why do people work? On the face of it, the answer seems relatively straightforward:

The 3 Core Levels of Material Need

Level 1: Resources
Working for immediate needs like food & shelter; living paycheck to paycheck.

Level 2: Security
Working to ensure safety; saving and investing for future needs.

Level 3: Freedom
Working to ensure self-sufficiency and independent choice of action.

These three levels of work are similar to the first few levels of Maslow’s Hierarchy of Needs or ERG Theory: work is a way we can meet our basic existential needs effectively and reliably.

That’s a perfectly reasonable explanation, but here’s where things get interesting: what happens when people have enough resources to do whatever they want? What does “Level 4” look like?

Level 4: Primary Motivation

Consider individuals like Warren Buffett, Steve Jobs, Dick Cheney, and Angelina Jolie. Each of these individuals has enough money to ensure that they never need to work again – they could quit tomorrow and live off of their savings in perpetuity. For some reason, however, they don’t – they keep working. Why?

After considering this question, I think that people who have reached the “Freedom” stage of work make a choice (either explicitly or implicitly) about what they’re ultimately working for. The choice ultimately revolves around what that person values most: power, status, pleasure, creation, or quality.

#1: The Autocrat

The Autocrat’s primary motivation is power and control. Common behaviors include continually seeking influence or control over the lives and actions of other people. Examples: businesspeople turned politicians like Henry Paulson (US Secretary of the Treasury), Dick Cheney (US Vice-President), and Michael Bloomberg (mayor of New York City).

#2: The Narcissist

The Narcissist’s primary motivation is attention, status, and fame. Common behaviors include continually seeking the attention and esteem of other people, and acting in ways that will ensure they receive more and more attention. Examples: actors / celebrities like Lindsay Lohan, Britney Spears, and Madonna.

#3: The Hedonist

The Hedonist’s primary motivation is pleasure and enjoyment of material goods. Common behaviors include the continual acquisition of luxurious homes, fine food, and exotic travel. Examples: Larry Ellison (CEO of Oracle), Mohammed bin Rashid Al Maktoum (Sheikh of Dubai), and Paul Allen (co-founder of Microsoft).

#4: The Architect

The Architect’s primary motivation is creating something new or reshaping the world. Common behaviors include the establishment of a vision of what the world “should” look like, then continually pursuing projects that they believe will bring the world closer to that ideal. Examples: Steve Jobs (CEO of Apple), Richard Dawkins (biologist and lecturer), Muhammad Yunus (father of micro-lending), and politicians like Ron Paul, Dennis Kucinich, and Al Gore.

#5: The Craftsman

The Craftsman’s primary motivation is quality and enjoyment of the work. Common behaviors include the continual exercise and improvement of a set of specific skills or abilities and use of those skills as a means of self-expression. Examples: Warren Buffett (investor and CEO of Berkshire Hathaway), J.K. Rowling (author), and Steven Spielberg (filmmaker).

Why is Primary Motivation Important?

Here’s my first hypothesis: once you identify your primary motivation, you’ll find it much easier to achieve your goals. These primary motivations appear to be relatively universal, and are based on very deep-seated psychological needs. Tapping into these sources of motivation directly allows people to accomplish their actual objectives more quickly, whatever they might be. Said another way, it’s easier to get what you really want if you identify what you really want.

My second hypothesis is that not all of these primary motivations lead to a lasting sense of personal satisfaction and fulfillment. If you’re an Autocrat, there will always be people you do not control. If you’re a Narcissist, there will always be people who look down on or ignore you. If you’re a Hedonist, the hedonic treadmill ensures that every pleasure eventually fades. If you’re an Architect, the world seems to have a tendency to stubbornly refuse to conform to your ideals.

When I look at the universe of “successful” people in the world, it appears that the Craftsman has the best shot at lasting personal satisfaction and fulfillment. You ultimately can’t control the world or other people, but you can control your dedication to perfecting your craft and expressing yourself through your work.

If we have a choice in determining our primary motivation, it seems that the Craftsman’s ethos has the most to offer: it may eventually lead to power, status, pleasure, and world-changing achievement, but it frees us from the perception that our self-worth depends on any of these things. That’s a remarkable combination.

Thoughts? Leave them in the comments.

March 05, 2010 08:41 PM