
Saturday, July 11, 2015

The formative-reflective measurement dichotomy


I have been asked several times in the past about the formative-reflective measurement dichotomy, and whether formative measurement should be used at all. Recently, a belief seems to have emerged among some methodological researchers that formative measurement should not be used under any circumstances. My view on the issue is not as extreme, and is summarized in the following text, adapted from the article listed below (whose full text is linked).

Kock, N., & Mayfield, M. (2015). PLS-based SEM algorithms: The good neighbor assumption, collinearity, and nonlinearity. Information Management and Business Review, 7(2), 113-130.

The formative-reflective measurement dichotomy is intimately related to a characteristic shared by the PLS-based SEM algorithms discussed here. These algorithms generate approximations of factors via exact linear combinations of indicators, without explicitly modeling measurement error. Recently, new PLS-based SEM algorithms have been proposed that explicitly model measurement error. These new algorithms suggest that formative and reflective latent variables may be conceptually the same, but at opposite ends of a reliability scale, where reliability can be measured through various coefficients (e.g., Dijkstra's consistent PLS reliability and Cronbach's alpha).
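The idea that composite-based algorithms build latent variable scores as exact linear combinations of indicators can be illustrated with a minimal sketch. The data and weights below are made up for illustration; in an actual PLS-based SEM algorithm the weights would be estimated iteratively, not fixed in advance.

```python
import numpy as np

# Toy data: 6 respondents, 3 standardized indicators of one construct.
X = np.array([
    [ 1.2,  0.9,  1.1],
    [-0.7, -0.5, -0.9],
    [ 0.3,  0.1,  0.4],
    [-1.1, -1.3, -0.8],
    [ 0.8,  1.0,  0.6],
    [-0.5, -0.2, -0.4],
])

# Illustrative weights (hypothetical; PLS algorithms estimate these iteratively).
w = np.array([0.4, 0.35, 0.25])

# The latent variable score is an exact linear combination of the
# indicators -- no explicit measurement error term is modeled.
scores = X @ w

# Standardize the scores, as composite-based algorithms typically do.
scores = (scores - scores.mean()) / scores.std()
print(scores)
```

Note that nothing in this construction distinguishes a formative from a reflective latent variable; the difference shows up in how reliable the resulting composite is as a depiction of the underlying construct.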

That is, a properly designed formative latent variable would typically have a lower reliability than a properly designed reflective latent variable. Nevertheless, both reliabilities would have to satisfy the same criterion: being above a certain threshold (e.g., .7). While reflective latent variables can achieve high reliabilities with few indicators (e.g., 3), formative latent variables require more indicators (e.g., 10). This mathematical property is in fact consistent with formative measurement theory, in which many different facets of the same construct should be measured so that the corresponding formative latent variable can be seen as a complete depiction of the underlying formative construct.
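To make the .7 threshold concrete, the sketch below computes two standard reliability coefficients for simulated reflective-style indicators: Cronbach's alpha from raw data, and the composite reliability (Dillon-Goldstein rho) from a set of hypothetical loadings. This is generic textbook arithmetic, not WarpPLS code, and the loadings are invented for illustration.

```python
import numpy as np

def cronbach_alpha(X):
    """Cronbach's alpha: (k/(k-1)) * (1 - sum(item variances) / var(total))."""
    k = X.shape[1]
    item_vars = X.var(axis=0, ddof=1)
    total_var = X.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def composite_reliability(loadings):
    """Dillon-Goldstein rho: (sum l)^2 / ((sum l)^2 + sum(1 - l^2))."""
    loadings = np.asarray(loadings)
    num = loadings.sum() ** 2
    return num / (num + (1 - loadings ** 2).sum())

# Simulate reflective-style data: one common factor plus modest noise,
# so few indicators suffice for high reliability.
rng = np.random.default_rng(0)
factor = rng.normal(size=200)
X = np.column_stack([factor + rng.normal(scale=0.5, size=200)
                     for _ in range(3)])

alpha = cronbach_alpha(X)
cr = composite_reliability([0.85, 0.80, 0.82])  # hypothetical loadings
print(f"alpha = {alpha:.3f}, CR = {cr:.3f}, above .7? {min(alpha, cr) >= 0.7}")
```

A formative latent variable, with weaker inter-indicator correlations, would typically need many more indicators before either coefficient clears the same threshold.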

Future research opportunities stem from the above discussion, leading to important methodological questions. What is the best measure of reliability to be used? It is possible that the composite reliability coefficient is a better choice than Cronbach's alpha under certain circumstances. Will the new PLS-based SEM algorithms that explicitly model measurement error (i.e., factor-based PLS algorithms, a.k.a. PLSF algorithms) obviate the need for the classic composite-based algorithms, or will the new algorithms have a more limited scope of applicability? Will formative measurement be re-conceptualized as lying at the low end of a reliability scale that also includes reflective measurement, providing a unified view of what could be seen as an artificial dichotomy? These and other related methodological questions give a glimpse of the exciting future of PLS-based SEM.
