Interactivity in Paris

Added by Stephane Goldstein on 14 February 2010 02:30

Interactive publications are cool, really cool. Journal articles can be brought to life with a wealth of features that enable readers to access and manipulate the underpinning information, visualise data in all sorts of dynamic (and often attractive) ways, and change how those data are presented and modelled: in short, to really get under the virtual skin of the written text.

So apparently all is well, and the recent Winter Workshop of the International Council for Scientific and Technical Information (ICSTI), held in Paris on 8 February, was a hugely interesting opportunity to learn about some of the relevant developments: 3-D data visualisation embedded in interactive PDFs, Jmol for chemical structures and Elsevier’s ‘Article of the Future’, to name but a few. But… but… it’s difficult not to have a nagging feeling that all these features, whilst hugely enriching the reading and learning experience, also impose growing burdens and overheads on authors, reviewers, editors and readers alike, all of whom must devote even more time and effort to producing or absorbing the enriched article. We all know, of course, that time and effort are precious commodities in a world where the volume of journal articles and other research outputs never ceases to grow (to his credit, Jan Velterop eloquently made that point in his presentation on the Knewco discovery and conceptualisation tool). And that’s before we even start thinking about costs…

Don’t get me wrong: I’m not a Luddite opposed to what are evidently very exciting developments. However, I am concerned about how an already overburdened scholarly communication system would cope if such enriched articles became much more prevalent. In particular, I would like to have heard a bit more in Paris about the challenges that these developments pose for the peer review process. Reviewing conventional articles is one thing; validating all the data underlying them is another. Do reviewers have the time, inclination, capacity or even ability to do this? And what are the implications for the quality assurance of those data themselves, as distinct research outputs in their own right?

Perhaps it’s not a bad thing to expose peer review to this sort of strain. It might open up some very interesting questions about its scope and meaning in an environment where research outputs cannot simply be reduced to a few pages of text.

© Research Information Network 2005–2009