About this Research Topic

Abstract Submission Deadline 22 January 2023
Manuscript Submission Deadline 22 May 2023

Validity of scores is the ultimate goal of measurement, yet measurement validity is challenged by the methodological and statistical decisions made throughout the assessment and analysis process. This Research Topic is therefore proposed as a means of providing new knowledge and ideas related to measurement. It addresses issues of measurement reliability and validity, new advances in evaluating these properties, and the methodologies used to attain them. An emphasis on new software and routines is intended to make these concepts accessible to a broad audience of researchers in the social sciences.

The goal of this Research Topic is to provide new knowledge in psychometrics; any work related to improving measurement is potentially of interest. Examples include confirmatory factor analysis, item response theory, measurement invariance, Bayesian approaches to measurement, reliability in longitudinal designs, invariance in longitudinal and cross-sectional designs, parametric and non-parametric extensions of item response models, analysis of response times, analysis of aberrant response behavior, systematic measurement error, the role of personal characteristics in measurement, scaling systems and their evaluation, item position effects, analysis of distractors, detection of cheating and guessing, nested-model applications of the above concepts, latent class and latent profile models, cognitive diagnostic models, classification models, computerized adaptive testing, multistage testing, cross-validation, simulation studies of the above, power analyses, extensions of descriptive fit indices, and analysis of response vectors. Applications of these concepts and the use of specific software routines are especially encouraged.

The aim is practical knowledge in psychometrics that applied researchers in the social sciences can use; theoretical pieces, applications, and simulation studies are all welcome, on the topics listed above. Authors of applied work are strongly encouraged to include details and annotated examples of the software employed in their studies, so that analyses can be replicated and extended.
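
As an illustration of the kind of annotated software example encouraged above, the sketch below runs a minimal classical reliability analysis (Cronbach's alpha and corrected item-total correlations) on simulated data in Python. It is a hypothetical sketch for orientation only, not part of the call; the data and all variable names are invented for the example.

# Illustrative sketch only: a minimal annotated reliability analysis of the kind
# the call encourages authors to include. Data and names are hypothetical.
import numpy as np

rng = np.random.default_rng(seed=1)

# Simulate item scores for 200 respondents on a 5-item scale:
# a common latent trait plus item-specific noise (classical test theory setup).
n_persons, n_items = 200, 5
trait = rng.normal(size=(n_persons, 1))                  # latent trait
errors = rng.normal(scale=0.8, size=(n_persons, n_items))
scores = trait + errors                                   # observed item scores

# Cronbach's alpha: (k / (k - 1)) * (1 - sum of item variances / variance of total score)
item_vars = scores.var(axis=0, ddof=1)
total_var = scores.sum(axis=1).var(ddof=1)
alpha = (n_items / (n_items - 1)) * (1 - item_vars.sum() / total_var)

# Corrected item-total correlations: each item against the sum of the remaining items.
item_total_r = [
    np.corrcoef(scores[:, j], np.delete(scores, j, axis=1).sum(axis=1))[0, 1]
    for j in range(n_items)
]

print(f"Cronbach's alpha: {alpha:.3f}")
for j, r in enumerate(item_total_r, start=1):
    print(f"Item {j}: corrected item-total r = {r:.3f}")

A submission would pair such annotated code with the substantive analysis and report the software and version used, so that readers can reproduce and extend the results.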

Keywords: psychometrics, measurement invariance, Bayesian modeling, nonparametric IRT, analysis of response vectors


Important Note: All contributions to this Research Topic must be within the scope of the section and journal to which they are submitted, as defined in their mission statements. Frontiers reserves the right to guide an out-of-scope manuscript to a more suitable section or journal at any stage of peer review.

