
Reproducible Research at the AAAS 2011 meeting in Washington, DC

Update: added links to other related posts, significantly expanded the section on Git and GitHub for scientific work.

Link summary: Page with abstracts and slide links, Victoria Stodden's blog, Mark Liberman's blog, my slides and extended abstract, audio (my talk runs from 53:25 to 1:10:47).

At this year's AAAS meeting, currently taking place in DC (in unseasonably warm and sunny weather), Victoria Stodden, from the statistics department at Columbia, organized a symposium titled The Digitization of Science: Reproducibility and Interdisciplinary Knowledge Transfer, which was very well attended.

Lessons from the Open Source software world

I have tagged this post with "Python" because my take on the matter was to contrast the world of classic research/academic publishing with the practices of open source software development; what little I know about the latter (as well as the specific tools I mentioned, like Sphinx) I picked up from the open source scientific Python projects I'm involved with, from IPython onwards. My argument is that the tools and practices of the open source community come much closer to the scientific ideals of reproducibility than much of what is published in scientific journals today.

The OSS world is basically forced to work this way, because people across the world collaborate on developing a project from different computing environments, operating systems, library versions and compilers. Without very strong systems for provenance tracking (aka version control), automated testing and good quality documentation, the task would simply be impossible. But many of these tools can be adapted for everyday scientific work; for some use cases they work extremely well, for others there is still room for improvement, but overall we can and should take these lessons into everyday scientific practice.

In my talk, I spent a fair amount of time discussing the Git version control system, not in terms of its technical details, but rather trying to point out how it should be viewed not just as a tool for software development, but as something that can be an integral part of all aspects of the research process. Git is a powerful and sophisticated system for provenance tracking that validates data integrity by design: Linus Torvalds designed it so that every commit is identified by a cryptographic hash of its contents plus the hashes of the commits it depends on (for details, his sometimes abrasive Google Tech Talk about Git is an excellent reference). This simple idea ensures that a single-byte change anywhere in the entire repository can be detected automatically. I keep an informal page of Git resources for those looking to get started.
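
To make this concrete, here is a minimal sketch, runnable inside any Git repository (the output will of course vary with the repository's history):

# Show each commit's hash alongside the hashes of its parents; because
# the parent hashes are part of what gets hashed, changing any ancestor
# changes every hash downstream of it, which Git will detect.
git log --format="%H %P"

# Inspect a raw commit object: it records a tree hash (the content
# snapshot), the parent commit hash(es), and the author/committer data.
git cat-file -p HEAD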

I use Git for just about every activity at the computer that involves manually creating content, with repositories not only for research projects that involve writing standalone libraries, but also for papers, grant proposals, data analysis and even teaching. Its distributed nature (every copy of the repository has all of the project's history) automatically makes it much more resilient to failures than a more limited legacy tool like Subversion, and its strong branching and merging capabilities make it great for exploratory work (something that is painful to achieve with SVN). Git is also the perfect way to collaborate on projects: all members have full version control, can commit work as they need to, and can make visible to collaborators only what they deem ready for sharing (this is impossible to do with SVN). Writing a multi-author paper or grant proposal with Git is a far saner, more efficient and less error-prone process than the common madness of emailing dozens or hundreds of attachments every which way between multiple people (for those who think Dropbox suffices for collaborative writing: that's like using a wood saw for brain surgery; Dropbox is great for many things and I love it, but it's not the tool for this problem). I have also used Git for teaching, by creating a public repository for all course content and individual repositories for each student that only the student, the teaching assistants and I can access. This enables students to fetch all new class content with a simple:
git pull
instead of clicking through dozens of files in some web-based interface (the typical system used by many universities). A single clone operation can also reconstruct the entire class repository on another computer if a student needs to work in more than one place or loses the old copy. And when it's time to submit homework, instead of emailing or uploading anything, all they need to do is:
git push
and the TAs have immediate access to all their work, including its development history. In this manner, not only is the process vastly smoother and simpler for all involved, but the students learn to use version control as a natural tool that is simply part of their daily workflow.
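
As a sketch of that workflow (the repository URL below is a made-up placeholder, not the real course server), a student's entire interaction with the course looks roughly like this:

# One-time setup: clone the personal repository, full history included.
git clone git@example.edu:course2011/student42.git
cd student42

# Before each class: fetch and merge any new content.
git pull

# Homework submission: record the work locally, then publish it to the
# server, where the TAs can immediately see it.
git add hw3/
git commit -m "Solutions for homework 3"
git push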

I also tried to highlight the role played by the GitHub service as an enabler of collaboration. While Git can be used on a single computer without any server support (and it is extremely useful in this mode), the moment several people want to share their repositories for collaborative work, some kind of persistent server is the best solution. GitHub, a service that is free for open source projects and offers paid plans for non-public work, has a number of brilliant features that make collaboration remarkably smooth. GitHub makes it trivial for new contributors to begin participating in a project by forking it (i.e. getting a personal copy to work on), and when they want their work incorporated into the project, they make a pull request. The original authors then review the proposed changes, comment on them (including line-specific comments with a single click), and once everyone is satisfied with the outcome, integrate them. This is effectively a public peer review system that, while created for software development, can be equally useful for collaborative authorship of a research project.
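
Sketched from the contributor's side (the project and user names below are hypothetical), the process looks roughly like this; the fork and the pull request themselves are made through the GitHub web interface:

# Clone your personal fork of the project.
git clone git@github.com:yourname/someproject.git
cd someproject

# Develop the proposed change on its own branch.
git checkout -b fix-issue-123
# ... edit and test ...
git commit -a -m "Fix issue 123"

# Publish the branch to your fork, then open a pull request on GitHub so
# the project authors can review, comment on and merge the changes.
git push origin fix-issue-123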

I should add, however, that I think there's still room for improvement in Git as a tool for pervasive use in the scientific workflow. As much as I absolutely love Git, it is a tool tailored for source code tracking, and its atomic unit of change is the line. As such, it doesn't work as conveniently when tracking, for example, changes in a paper (even if written in TeX), where a small change can reflow a whole paragraph, showing a diff that is much larger than the real change. In this case, the "track changes" features of word processors actually do a better job of showing the specific changes made (despite the fact that I think they make for a horrible interface for the overall workflow). [Note: in the comments below, a reader points out that the --word-diff option solves this problem, though I think it requires a very recent version of Git, 1.7.2 at least. It's fantastic to see this kind of improvement already available.] And for tracking changes to binary files, there's simply no meaningful diff available. It would be interesting to see new ideas for improving a tool like Git for these kinds of use cases.
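
A quick illustration of the difference (the file name is a hypothetical example):

# Line-oriented diff: a reflowed paragraph shows up as many changed lines.
git diff paper.tex

# Word-oriented diff (Git 1.7.2 and later): only the words that actually
# changed are highlighted, so a reflow no longer drowns out the real edit.
git diff --word-diff paper.tex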

I wrapped things up with a short mention of the new Open Research Computation journal, whose editorial board includes Victoria and me, along with several well-known contributors to the scientific Python ecosystem: Titus Brown, Hans-Petter Langtangen, Jarrod Millman, Prabhu Ramachandran and Gaël Varoquaux.

Other presentations

I spoke after Keith Baggerly and Victoria. Keith presented an amazing dissection of the (ongoing) scandal over the Duke University cancer clinical trials, which has seen extensive media coverage. This case is a bone-chilling example of the worst that can happen when unreproducible research is used as the basis for decisions that affect the health and lives of human beings. Yet, despite the rather dark subject, Keith's talk was one of the most lively and entertaining presentations I have seen at a conference in a long time. Victoria discussed the legal framework in which we can begin considering the problem of reproducible computational research; she was instrumental in the NSF's new grant guidelines, which now include a mandatory data management plan section. She has a unique combination of computational and legal backgrounds, which is essential for tackling this problem in a meaningful way (since licensing, copyright and other legal issues are central to the discussion).

Afterwards, Michael Reich from the Broad Institute presented the GenePattern project, an impressive genomic analysis platform that includes provenance tracking and workflow execution, as well as a plug-in that connects Microsoft Word documents to the execution engine. While the Word graphical user interface would likely not be my environment of choice, the GenePattern system seems very well thought out and useful. The last three talks were by Robert Gentleman of BioConductor fame; David Donoho, Victoria's PhD advisor and, together with Jon Claerbout, a pioneer in posing the problem of reproducibility in computational work; and finally Mark Liberman of U. Penn (see Mark's blog for his take on the symposium).

I think the symposium went very well: attendance was good and there was lively discussion with the audience. A journalist made a good point about how improvements on the reproducibility front matter to reporters, whose job is to present the results of scientific work to a sometimes skeptical public. If our work comes with strong, credible guarantees of reproducibility, it will be that much easier to present to the society that ultimately decides whether or not to support the scientific endeavor.

There is a lot of room for improvement, as Keith Baggerly's talk painfully reminded us. But I think the climate is finally changing, and in the right direction: the tools are improving, people are interested, and funding agencies and journals alike are modifying their policies.

Comments

chuck said…
Very interesting. I've often wondered how well validated a lot of the old Fortran models were, and the mess of the CRU code was an eye-opener. Reproducing results, even by the same people who did the original work, can be a real job given the lack of documentation and lax code versioning that are usual when results have to go out on a schedule.
Anonymous said…
[Git] doesn't work as conveniently when tracking, for example, changes in a paper (even if written in TeX), where a small change can reflow a whole paragraph, showing a diff that is much larger than the real change.

Git has a word-diff mode that is very useful for such things; try 'git diff --color-words', for instance. You might want to define the following aliases in your ~/.gitconfig:
[alias]
    wdiff = diff --color-words
    wshow = show --color-words
Fernando Perez said…
@chuck, the sad thing about the CRU mess is that it's par for the course in a lot of computational scientific work. And it's not an issue of malfeasance or fraud, rather that the structure of incentives and the culture of publication without explicit access to the raw data and code makes this kind of thing the almost inevitable outcome.

@dlaxalde, thanks for the --word-diff tip! It seems I need to update to 1.7.2, which isn't in Ubuntu 10.10 yet, but I'll do a local build later. It's great when you complain about a missing feature, only to find out it was implemented in the very latest release :)
jrovegno said…
Very interesting, Professor Perez.
Back in 2008 I proposed to my professors, here in Chile, an idea similar to the one you present, related to the lessons of open source for teaching engineering {1}; the truth is that they didn't understand me and ignored me.
But at least now I see that I'm not the only one who thinks about these kinds of things.

Regards

Reference
{1} Collaborative fast learning
Unknown said…
@fernando: the --word-diff option is new, but --color-words is quite old; I think I have been using it for at least a year.
Fernando Perez said…
@ondrej, thanks for the tip! I keep learning useful git tricks :)
Fernando Perez said…
@jrovegno, don't give up the effort: these ideas are making their way little by little, and they will surely keep spreading and bringing about change.
