Reporting Instrument Guide

Evelyn Perloff, Ph.D., Behavioral Measurement Database Services, Inc., Pittsburgh, PA
Fred B. Bryant, Ph.D., Department of Psychology, Loyola University Chicago


Adopted and Adapted Research Instruments


Problems with reporting the development of new psychology research instruments

A. What is the new instrument intended to measure?

One of the most serious problems in measurement reporting in health research is the frequent failure of developers of new instruments to explain the purpose of their instruments. This serious oversight makes it impossible to know exactly what a given instrument is intended to measure. Without a clear, precise definition of the target construct that an instrument is designed to measure, there is no way for subsequent researchers to know what it is that the instrument assesses.6 Yet health researchers often use a measurement instrument whose purpose they are unable to explain. This lack of clarity and specificity in defining the focal constructs being measured compromises the validity and value of scientific research. If readers of an article reporting a newly developed instrument are unable to determine precisely what it is that the new instrument is meant to measure, then the article is not science. As Ziegler6 has argued, the definition of a construct should not only include an explanation of its conceptual components and behavioral manifestations but should also explain its relation to other constructs.

Every health researcher who reports the development of a new measurement instrument should be required to explain clearly and precisely the exact concept or concepts that their instrument is designed to assess. Peer-reviewed health journals should not publish an article reporting the development of a new measurement instrument unless the authors have met this essential requirement. And yet, every year countless health research articles are published reporting the development of a new instrument whose exact purpose is never explicitly specified. This reporting problem must be corrected if we are to increase the conceptual precision of research measurement and optimize progress in the health sciences.

B. What is the name of the new measurement instrument?

The second problem in measurement reporting in health research is the widespread failure of originators of new measures to title their instruments. Not titling an instrument makes it impossible to track its use and to manage information about it. The lack of a formal title for an instrument also makes it difficult or impossible for other researchers to be sure that they are using the same measures as prior investigators.

How does one find and keep track of other research studies that have used a particular instrument, if the instrument in question has no name or title? Imagine physicians trying to prescribe the proper medication for a particular medical condition when the medication they are seeking has no official name. How could physicians ever be sure their patients receive the correct drug rather than some other medication that seems similar or identical? Clearly, untitled measurement instruments impair the ability of future scientists both to replicate research using these measures and to conduct meta-analyses of the psychometric properties of these instruments.

Problems with reporting the adaptation of pre-existing instruments

A. When researchers report using an ‘adapted’ version of a pre-existing measurement instrument, which specific instrument have they modified, who is the author of this original instrument, and what is its original citation?

Clearly, when health researchers have modified a pre-existing measure, they should specify the name and author(s) of the original instrument and provide its original citation. All too often, however, researchers either fail to report all of this information or report inaccurate information. This problematic reporting practice makes it difficult or impossible for later researchers to determine the origins of the ‘adapted’ instrument or to compare the ‘adapted’ and original forms of the measurement instrument.

B. When researchers report using an ‘adapted’ version of a pre-existing measurement instrument, what specific changes have they made in ‘adapting’ the instrument?

It is common practice in many health journals for researchers to modify a pre-existing instrument to suit the needs of their research without reporting the specific nature of these modifications. In their research articles, investigators often simply state that they ‘adapted,’ ‘revised,’ or ‘modified’ an instrument for use in their study, but they do not always explain the precise ways in which they altered the pre-existing measure. This problematic reporting practice leaves the actual measures unspecified, making it impossible for future researchers to replicate the measures used in such studies. As noted earlier, if the methods of research cannot be replicated, then the research is not science. Unfortunately, this unsound reporting practice is rampant in peer-reviewed health research journals. Researchers who modify a pre-existing instrument should explicitly clarify the changes they have made to the instrument and why these changes were deemed necessary.

C. When researchers report using ‘selected items’ from a pre-existing measurement instrument, what specific items did these researchers administer and analyze?

In their research articles, health researchers often report using only a subset of the full battery of items from a larger, pre-existing instrument. However, these researchers do not always report the specific items that they used. Clearly, this reporting practice makes it impossible for future researchers to administer or analyze the same measures used in the earlier study. Once more we note that any field of empirical inquiry that uses non-reproducible methods is not science. Investigators who report using ‘selected items’ from a pre-existing instrument in a research study should explicitly clarify the specific items they administered and analyzed, in order to enhance the ability of future health researchers to replicate their measurements.

D. When researchers report using ‘selected items’ from a pre-existing instrument, on what basis did they decide to administer or analyze only a subset of the original items?

When a health researcher reports using ‘selected items’ from a larger, pre-existing instrument in a research article, it is important for future researchers who wish to replicate this earlier study to know whether the original researcher: (a) selected a subset of items a priori at the outset of the study and administered only these selected items to the sample; or (b) administered the full battery of items from the pre-existing instrument and then selected a subset of items to analyze a posteriori after collecting the data. Not knowing which of these two procedures original researchers adopted in ‘using’ selected items makes it impossible for later researchers to be certain they are administering the same measurement tools as in past research.

When health researchers report selecting a subset of items to analyze a posteriori after administering the full battery of items from the pre-existing instrument, there is another way in which the replicability of results can be compromised. If, on the one hand, researchers have used ‘data mining’ methods to identify a subset of the full battery of items that produce desired statistical results, then this approach is prone to capitalize on chance and will thus have limited cross-sample generalizability.7 If, on the other hand, researchers have selected the specific subset of items to analyze on theoretical grounds before analyzing the data, then this approach is less likely to capitalize on chance and should have greater cross-sample generalizability. For these reasons, researchers who have administered the full battery of items from a pre-existing instrument and then selected a subset of items to analyze after collecting the data should be required to report the basis on which they selected the subset of items they have analyzed.
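To make the point about capitalizing on chance concrete, the following minimal simulation sketch (ours, not part of the original article; it assumes Python with NumPy and uses invented data) selects, post hoc, the items that happen to correlate most strongly with an outcome in one sample of pure noise, and then shows that the apparent effect largely disappears when the same ‘selected items’ are examined in a second sample.

```python
# Minimal illustrative sketch (not from the article): a posteriori item
# selection on noise looks 'predictive' in the selection sample but not in a
# fresh replication sample, i.e. limited cross-sample generalizability.
import numpy as np

rng = np.random.default_rng(0)
n_items, n_respondents, n_selected = 40, 100, 5

def item_outcome_correlations(items, outcome):
    """Pearson correlation of each item (column of `items`) with `outcome`."""
    items_c = items - items.mean(axis=0)
    outcome_c = outcome - outcome.mean()
    num = (items_c * outcome_c[:, None]).sum(axis=0)
    den = np.sqrt((items_c ** 2).sum(axis=0) * (outcome_c ** 2).sum())
    return num / den

# Selection sample: item scores and the outcome are pure noise, yet the items
# most correlated with the outcome in this sample appear to 'work'.
items_1 = rng.normal(size=(n_respondents, n_items))
outcome_1 = rng.normal(size=n_respondents)
r_1 = np.abs(item_outcome_correlations(items_1, outcome_1))
selected = np.argsort(-r_1)[:n_selected]
print("Mean |r| of selected items, selection sample:",
      round(float(r_1[selected].mean()), 3))

# Replication sample: the same selected items lose most of their apparent
# relation to the outcome.
items_2 = rng.normal(size=(n_respondents, n_items))
outcome_2 = rng.normal(size=n_respondents)
r_2 = np.abs(item_outcome_correlations(items_2, outcome_2))
print("Mean |r| of selected items, replication sample:",
      round(float(r_2[selected].mean()), 3))
```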

E. When researchers report using ‘selected’ items from a larger, pre-existing instrument, what is the name of the newly condensed instrument containing only the selected subset of items that were administered or analyzed?

As noted above in relation to the development of new instruments, health researchers who modify a pre-existing instrument should give a unique identifying title to this newly modified measure. Otherwise, it will be impossible to track the use of this modified version of the instrument and to manage information about it. The lack of a unique title for a modified instrument also makes it difficult or impossible for other researchers to be sure that they are using the same measures as prior investigators who report using this modified measure. When multiple untitled modifications of an instrument exist, investigators cannot be sure they are using the particular modified version of the instrument that others have used.

F. Who is the author of a modified instrument? What degree of modification to the original instrument is necessary for the researcher making these modifications, rather than the developer of the original instrument, to be named as the author of the modified instrument?

The revisions made to a pre-existing instrument may be minor (e.g. changing a single word in one item), major (e.g. rewording all items to be appropriate for use with young children or in relation to a particular medical disorder), or somewhere between these two extremes (e.g. changing the instructions to focus respondents on the past week as opposed to life in general). Sometimes health researchers extract and administer only a single item or a single subscale of a pre-existing instrument, or they omit certain items to shorten a pre-existing instrument. Other times researchers pick and choose selected items or subscales from several pre-existing instruments, in order to create a ‘hybrid’ composite measure that assesses a wider range of constructs or dimensions than previously available. In yet other instances, researchers adapt a pre-existing paper-and-pencil instrument for data collection via other media, such as the Internet, a smartphone app, the telephone, or a face-to-face interview. We note that any form of modification to a measurement instrument produces a revised measure.

Surprisingly, there are currently no formal guidelines concerning how to assign authorship for such modified instruments in health research. And there is no general consensus regarding how extensive these revisions must be, in order for the researcher making the modifications to be credited with authorship of the modified instrument. Although most people would probably agree that a researcher who simply omits one word from a pre-existing instrument should not be cited as the author of the modified measure, no rules currently exist for deciding when the developer of the original measure, as opposed to the researcher making modifications to it, should be given authorship of a modified measure.

Guidelines for reporting the development of new measurement instruments and the modification of pre-existing measurement instruments

To change these problematic reporting practices and thereby enhance the replicability of empirical research and the management of measurement information, we propose the following set of guidelines for reporting measurement in health research (see Appendix A), which authors should meet before their research articles can be published in peer-reviewed health journals. We offer these guidelines for use not only by authors of research reports, but also by journal reviewers and editors who evaluate such reports for publication in the course of the peer-review process. Adopting these guidelines would serve to enhance the validity, replicability, and utility of research in public health. A brief illustrative sketch of how these reporting elements might be recorded follows the two lists below.

Guidelines for Reporting the Development of New Instruments

  1. Researchers who report the development of a new instrument should clearly and explicitly explain exactly what the new instrument is designed to measure.
  2. Researchers who report the development of a new instrument should give the new instrument a unique title.

Guidelines for Reporting the Modification of Pre-existing Instruments

  1. Researchers who report using an ‘adapted’ version of a pre-existing instrument should report the title, author(s), and correct citation of the original instrument.
  2. Researchers who report using an ‘adapted’ version of a pre-existing instrument should explicitly specify the changes they have made to the original instrument.
  3. Researchers who report using ‘selected items’ from a pre-existing instrument should report the specific items they have administered and analyzed.
  4. Researchers who report using ‘selected items’ from a pre-existing instrument should explain the basis on which they decided to administer or analyze only a subset of the original items.
  5. Researchers who ‘adapt’ a pre-existing instrument should give the newly modified instrument a unique title.
  6. Researchers who ‘adapt’ a pre-existing instrument should clarify who the author of the newly modified instrument is and should specify this information in research reports that use the modified instrument.
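Purely as an illustration, and not as part of the published guidelines, the reporting elements above could be captured in a simple structured record so that an adapted instrument’s provenance is documented explicitly. The following minimal Python sketch uses hypothetical field names and invented example details.

```python
# Illustrative, hypothetical sketch: a structured record mirroring the
# modification-reporting guidelines above (field names are our own invention).
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class AdaptedInstrumentReport:
    # Guideline 1: title, author(s), and citation of the original instrument
    original_title: str
    original_authors: List[str]
    original_citation: str
    # Guideline 2: the specific changes made to the original instrument
    changes_made: List[str]
    # Guidelines 3-4: items administered/analyzed and the basis for selection
    items_administered: Optional[List[int]] = None
    items_analyzed: Optional[List[int]] = None
    selection_basis: Optional[str] = None  # e.g. 'a priori, theory-driven'
    # Guidelines 5-6: unique title and authorship of the modified instrument
    modified_title: Optional[str] = None
    modified_authors: List[str] = field(default_factory=list)

# Hypothetical usage example; all names and details are invented.
report = AdaptedInstrumentReport(
    original_title="Example Coping Scale",
    original_authors=["A. Author"],
    original_citation="Author A. Example Coping Scale. J Example 2001;1:1-10.",
    changes_made=["Reworded all items for adolescents",
                  "Changed time frame from 'in general' to 'past week'"],
    items_administered=list(range(1, 21)),
    items_analyzed=[1, 4, 7, 12],
    selection_basis="a priori, theory-driven",
    modified_title="Example Coping Scale - Adolescent Short Form",
    modified_authors=["B. Researcher"],
)
```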

Author Statements

Ethical approval

Because we did not collect primary data for this manuscript, our work was exempt from IRB approval.

Funding

The work reported in this manuscript was not funded by a grant agency or private foundation.

Competing interests

We have no conflicts of interest to declare in relation to the work reported in this manuscript.

References

  1. U.S. Department of Health and Human Services. Guidelines for the conduct of research within the public health service (January 1, 1992). Washington, DC: U.S. Department of Health and Human Services, Public Health Service, Office of the Assistant Secretary for Health; 1992.
  2. von Elm E, Altman DG, Egger M, Pocock SJ, Gøtzsche PC, Vandenbroucke JP. Strengthening the reporting of observational studies in epidemiology (STROBE) statement: guidelines for reporting observational studies. Bull World Health Organ 2007;85:867-72.
  3. Schulz KF, Altman DG, Moher D. The 2010 CONSORT statement: updated guidelines for reporting parallel group randomised trials. BMC Med 2010;8:1-9.
  4. Staquet M, Berzon R, Osoba D, Machin D. Guidelines for reporting results of quality of life assessments in clinical trials. Qual Life Res 1996;5:496-502.
  5. APA Publications and Communications Board Working Group on Journal Article Reporting Standards. Reporting standards for research in psychology: why do we need them? What might they be? Am Psychol 2008;63:839-51.
  6. Ziegler M. Stop and state your intentions! Let’s not forget the ABC of test construction. Eur J Psychol Assess 2014;30:239-42.
  7. Sijtsma K. Playing with data – or how to discourage questionable research practices and stimulate researchers to do things right. Psychometrika 2016;81:1-15.
