
The Practical Side of Measurement

Continued from The Behavioral Measurement Letter, Vol. 6, No. 1, Winter 1999


Ora L. Strickland


Using instruments that are reliable and valid to measure concepts of interest is a central concern of researchers. It is a well-accepted fact that one cannot study well what one cannot measure well. Therefore, much time and energy are invested in developing instruments that measure what they purport to measure. However, a reality of measurement that must never be forgotten is that instruments that have been reliable and valid in one situation may not function well under other circumstances. In the course of selecting or developing questionnaires or designing the measurement component of a study, one may become so bogged down in selecting instruments that have previously been shown to be psychometrically sound that the practical aspects of measurement are forgotten. When the practical side of measurement is not given adequate attention, the resulting data can be compromised. Measurement instruments need to be practical for respondents or subjects, and for the researcher as well. The more practical a measurement instrument or protocol is for the subject and the researcher, the more likely the data generated will be reliable and valid.

Instruments are practical for subjects when they are appropriate for the population studied, easy to understand, simple to complete, and not too demanding of energy and time. When subjects are children, ill or frail, or have low literacy, the challenges of measurement become more pronounced. For example, it may be difficult to find an instrument to measure particular variables in children. When such an instrument is identified, care needs to be taken to ensure that it is usable for the age group of children the investigator will be studying. Since the reading and oral comprehension levels of children vary considerably with even a couple of years' difference in age, an instrument may be useful for one age group but not for another. Although available instruments should be used when they exist, it may be necessary to revise an instrument to make it usable for a specific age group.

When subjects are quite ill, a long and cumbersome instrument may be too taxing for them, and thus its use with such subjects may result in questionable data. In this case, the investigator should consider using a shortened version of the instrument and administering it as an interview. If a shorter version of the instrument is not available, it may be worth the time required to shorten it. It also may be necessary to use a proxy respondent (such as a parent or caretaker) who is intimately familiar with and aware of the subject's views and situation. However, proxy respondents should be used only when absolutely necessary, because there is no real substitute for the subject. When proxy respondents are used, they should be explicitly told to answer as the subject would respond, not as they themselves would. When study results are reported, the use of proxy respondents should, of course, be noted. If, however, an instrument addresses the variable(s) of interest and is designed to be used by raters or observers, it may be a good alternative to the use of proxies.

Aging or disabled subjects also present a measurement challenge. When questionnaires are used with older subjects, they should be printed in a large typeface so that items are easier to see. However, if the sample includes subjects who are blind, have arthritis, or have difficulty using their hands for writing, questionnaires should be administered via interview. When questionnaires are mailed to older subjects or to subjects who lack sufficient literacy, it is useful to interview the subjects, or at least a sample of them, by phone as a reliability check. The phone interviewer can have the subject simultaneously read the mailed questionnaire. Having respondents read their mailed questionnaire as they are interviewed by phone also improves the response rate.

The total measurement protocol for each data collection point also needs to be considered. I have found this to be of concern even with healthy subjects. When too much time and energy are required of subjects to complete data collection, subject recruitment will suffer and the dropout rate will increase. In this regard, one should consider the following: Will the combination of data collection methods and instruments employed be so demanding that subjects become fatigued and data are compromised? How much time and energy will subjects have to expend in each data collection session? Do subjects have physical or mental limitations that compromise the usefulness of specific instruments or data collection approaches? Sometimes it may be necessary to break a data collection point into two or more sessions to avoid overly fatiguing subjects. Also, using a variety of data collection approaches can break up the monotony of a session and help prevent subject fatigue.

Instruments are practical for researchers when they are not only available but accessible, easy to administer and score, and not too demanding of time and other resources. It is important to remember that the costs associated with a measurement protocol include not only the financial cost of purchasing the instrument but also the costs of administration and scoring. If it is impractical to interview all subjects due to cost constraints, then a representative sample may be interviewed as a reliability check. If special training is required to administer or score the instrument, the time and costs associated with training data collectors or scorers are added expenses. Instruments that require computers for administration and scoring may be more financially costly but less costly in the time expended performing these tasks. Other factors that can increase the cost of instruments include the need for special equipment or a special setting for administration.

The practical side of measurement requires striking the right balance between measurement principles and practices and common sense. Highly complicated measurement protocols that are not practical to subjects or investigators are not useful in the long run.


Ora L. Strickland, PhD, RN, FAAN, is Professor and the first to occupy the Independence Foundation Endowed Research Chair at the Nell Hodgson Woodruff School of Nursing at Emory University. She is founding Editor of the Journal of Nursing Measurement, on the editorial board/review panel of Advances in Nursing Science, Research in Nursing and Health, Scholarly Inquiry for Nursing Practice: An International Journal, and Nursing Leadership Forum. She co-edited the four-volume Measurement of Nursing Outcomes, has written or contributed to 18 books, and received two awards in health journalism (1998) for her column in the Baltimore Sun, “Nurses Station,” and seven American Journal of Nursing Book of the Year awards. She was the youngest person ever elected to the American Academy of Nursing.



There are three items of interest relevant to BMDS and HaPI to report here – a letter to be published in the Archives of Family Medicine, a presentation to be made at the 1999 APA Convention, and an article in the quarterly magazine Eye on Psi Chi.


Letter on Identifying Instruments for Use in Family Practice

A letter written by Drs. Stephen Zyzanski, Professor of Family Medicine at Case Western Reserve University, and Evelyn Perloff, Director of BMDS, on identifying instruments for measuring behavioral variables found in family practice was accepted for publication by the Archives of Family Medicine. The letter was written in response to an article published in the July/August 1998 issue of the Archives that emphasized the important roles of psychosocial factors in health and illness but discussed a relatively small number of factors and instruments used to measure them. In their letter, Zyzanski and Perloff allude to the large number and broad spectrum of behavioral and psychosocial factors relevant to health, and point to HaPI as an excellent source for identifying and learning about instruments to assess these factors in clinical practice.


Presentation on Using HaPI as an Instructional Tool at 1999 APA Convention

BMDS staff will present an interactive demonstration on the use of HaPI as an instructional tool on August 20 and 21, 1999, at the American Psychological Association's Annual Convention, being held this year in Boston. The presentation, tentatively titled "Using the Health and Psychosocial Instruments (HaPI) Database to Select Measures for Test Validation," is part of the APA's Miniconvention on Education and Technology. The interactive demonstration will employ exercises in which students (in this case APA members) use the HaPI database to identify, learn about, and compare and contrast instruments to measure "grief," "perceived control," and "empathy." The presenters will be Fred B. Bryant, Rebecca Guilbault, and Evelyn Perloff.


Article on Measurement, Instruments, and HaPI in Eye on Psi Chi

Daniel Moore, MA, Fred Bryant, PhD, and Evelyn Perloff, PhD, wrote an article titled “Measurement Instruments at Your Fingertips” that was published in the Winter 1999 issue of Eye on Psi Chi, the quarterly magazine of Psi Chi, the National Honor Society in Psychology. The article emphasizes the importance and ubiquity of measurement, measurement instruments, and means to identify appropriate measurement instruments, including HaPI, in the context of an undergraduate psychology class. The “teaser” on the magazine’s front cover humorously summarized the article’s content in two short sentences: “Looking for the right instrument? Don’t worry, be HaPI!”



