DocMaps is a community-driven framework that will meet three key requirements for representations of editorial processes in a healthy publishing ecosystem:
Extensibility: the framework should be capable of representing a wide range of editorial process events, ranging from a simple assertion that a review occurred, to a complete history of editorial comments on a document, to a standalone review submitted by an independent reviewer.
Machine-readability: the framework should be represented in a format (e.g. JSON-LD) that can be interpreted computationally and translated into visual representations.
Discoverability: the framework should be publishable such that events are queryable and discoverable via a variety of well-supported mechanisms.
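To make the machine-readability requirement concrete, the sketch below serializes a single review event as JSON-LD using Python's standard library. This is an illustrative assumption only, not the official DocMaps schema: the field names, context URL, publisher, and DOI are all placeholders chosen to show the general shape of a machine-readable editorial event.

```python
import json

# Hypothetical, simplified DocMap-like object. Every field name and value
# here is illustrative; consult the DocMaps specification for the real schema.
docmap_sketch = {
    "@context": "https://w3id.org/docmaps/context.jsonld",  # assumed context URL
    "type": "docmap",
    "publisher": {"name": "Example Review Service"},  # placeholder publisher
    "steps": {
        "_:b0": {
            "inputs": [{"doi": "10.1101/2021.01.01.000000"}],  # placeholder DOI
            "actions": [
                {
                    "participants": [{"role": "peer-reviewer"}],
                    "outputs": [{"type": "review", "published": "2021-05-01"}],
                }
            ],
            # An assertion that a review occurred, per the extensibility
            # requirement above.
            "assertions": [{"status": "reviewed"}],
        }
    },
}

# Serialize to JSON so the event can be aggregated, surfaced, and queried.
serialized = json.dumps(docmap_sketch, indent=2)
print(serialized)
```

Because the object is plain JSON, a consuming service can round-trip it losslessly and query individual fields, which is what makes downstream aggregation and visualization possible.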
In the summer and fall of 2020, a Technical Committee of leading publishers, technology and infrastructure developers, review services, taxonomy definers, and open science advocates convened to create an initial proposed Framework for DocMaps.
Starting in the spring of 2021, an informal Pilot Working Group formed out of the Technical Committee to implement a pilot version of DocMaps. The initial pilot focuses on using DocMaps to send information about community preprint evaluations from eLife’s Sciety aggregator and EMBO’s Early Evidence Base aggregator to Cold Spring Harbor Laboratory Press’s bioRxiv and medRxiv.
Thanks to new funding from the Chan Zuckerberg Initiative, the DocMaps Implementation Group has expanded to develop a Software Development Kit (SDK) for creating and consuming DocMaps, support for more diverse types of evaluation processes, comprehensive documentation, mappings to common community standards and vocabularies, and support for anyone interested in using DocMaps in their projects.
See an example of DocMaps in action with our Demo Visualization Tool by visualizing the review process of any preprint deposited to Crossref.
We’re looking for other groups interested in implementing DocMaps: publishers of reviews, developers of publishing or reviewing platforms that are interested in generating DocMaps, and aggregation or database services that are interested in collecting or displaying them.
In addition to being able to offer technical support, we can also offer modest direct development funding for groups that need financial support to invest in DocMaps.
Editorial practices (i.e., the processes, checks, and transformations that journals and publishing platforms apply to manuscripts, such as peer review, ethics checks, and certification such as journal acceptance) are highly heterogeneous, and will become even more so as scholarly publishing is disrupted by new innovations, the open science movement, and the removal of barriers to entry. Multiple initiatives to develop models describing peer review practices have emerged, including Transpose, Peer Review Transparency, Review Maps, and an STM Association working group.
These models are a positive development, but they are often narrowly focused on the needs of their creators, and as such do not fully accommodate the needs of readers, funders, and the scholarly publishing ecosystem as a whole. In particular, these efforts do not focus on representing editorial practices in ways that can be reliably aggregated, surfaced, and queried. Moreover, these efforts are often limited to traditional peer review processes, and do not capture the full range of editorial practices and events needed to accommodate a Publish-Review-Curate world where reviews can be conducted by multiple parties. To support this world, the community needs a machine-readable, discoverable, and extensible framework for representing and surfacing object-level review/editorial events.
View our Co-Creation Community introductory webinar to learn more: