Vicky Steeves & Nick Wolf | February 16, 2018
Vicky's ORCID: 0000-0003-4298-168X | Nick's ORCID: 0000-0001-5512-6151
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.
When choosing which metrics you want to track, you have to consider what is being measured, and the strength of the evidence for impact.
Ask yourself: what am I trying to prove?
Citation Counts
The raw number of times that your data or publication has been cited.
Confidence: mid-to-low; not a true measure because papers/datasets may be cited for reasons other than acknowledging influence (refutation, etc.)
H-Index
A measure of impact derived from citation counts: a researcher has index h if h of their published papers have been cited h or more times each.
Confidence: high; researchers must produce highly cited papers in quantity to score well.
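The definition above can be sketched as a short function (a minimal illustration, with made-up citation counts):

```python
def h_index(citation_counts):
    """Largest h such that h papers have at least h citations each."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank  # this paper still clears the bar
        else:
            break
    return h

# A researcher whose papers were cited 10, 8, 5, 4, and 3 times
# has an h-index of 4: four papers have at least 4 citations each.
print(h_index([10, 8, 5, 4, 3]))  # → 4
```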
Page Views
The number of times the publication and/or dataset catalog page has been viewed.
Confidence: mid; indicates the level of interest in the dataset, and the level of awareness of its existence.
Download Counts
The number of times the publication and/or dataset has been downloaded.
Confidence: mid-high; indicates a strong level of interest in the data, but the metric doesn't reveal how the downloaded data was used.
Journal Impact Factor (JIF)
Gauges the impact of a journal in a given year, defined as the average number of citations received that year by the papers the journal published in the preceding two years. Officially calculated by Clarivate Analytics (formerly Thomson Reuters).
Confidence: low; it’s often used as a proxy measure for other things.
Post Publication Peer Review
A method of quality control where articles/datasets are reviewed after publication.
Confidence: low; more concerned with quality control than impact, but the nature of these reviews may reveal evidence of impact.
Bibliographic Managers
Services such as Mendeley, Zotero, CiteULike, and BibSonomy allow users to record online resources for their own reference or to recommend them to others.
Confidence: mid-to-low; the recommendation feature weighs more heavily than simply adding an item to a bibliography.
Social Media Links
When a dataset or publication is mentioned on social media such as Twitter, blogs, Facebook, etc.
Confidence: mid-to-low; the tone of the post matters, but if people choose to share research with their networks, it has likely had an impact on them.
An article-centered service which monitors sources for mentions of scholarly articles, then computes its findings into a score indicating the quality and quantity of attention.
Get a researcher profile ID
Place your data with repositories that track use (at least downloads, preferably citations)
Publish with journals and presses that deploy metric-enabling metadata
Open Researcher & Contributor ID
Do you have one? No? Let's get you one at ORCID.org!
When you publish, you should make the underlying data available in a repository that issues DOIs! You then link that DOI in your "Supplementary Materials" section!
This means that anyone who wants to use your data must go to this repository, download it, and cite it if they publish work that uses it!
You can also submit your data to a peer-reviewed data journal! These all issue DOIs and will associate your data with the corresponding publication.
Examples:
**here is a list of data journals**
Publications:
Data:
Email us: vicky.steeves@nyu.edu | nicholas.wolf@nyu.edu
Learn more about RDM: guides.nyu.edu/data_management
Get this presentation: guides.nyu.edu/data_management/resources
Make an appointment: guides.nyu.edu/appointment