
Monday, January 23, 2012

Version 3.0 of WarpPLS is coming soon, with several new features

Version 3.0 of WarpPLS is currently undergoing a battery of tests, and will be made available soon. Among the new features is the calculation of indirect and total effects, which are exemplified in this health data analysis post based on the China Study II dataset. Here is a comprehensive list of new features in this version:

    - Addition of latent variables as indicators. Users now have the option of adding latent variable scores to the set of standardized indicators used in an SEM analysis. This option is useful in the removal of outliers, through the use of restricted ranges for latent variable scores, particularly for outliers that are clearly visible on the plots depicting associations among latent variables. It is also useful in hierarchical analysis, where users define second-order (and higher-order) latent variables, and then conduct analyses with different models including latent variables of different orders.

    - Blindfolding. Users now have the option of using a third resampling algorithm, blindfolding, in addition to bootstrapping and jackknifing. Blindfolding creates a user-selected number of resamples, in each of which a certain number of rows is replaced with the means of the respective columns. The number of rows modified in each resample equals the sample size divided by the number of resamples. For example, if the sample size is 200 and 100 resamples are selected, each resample will have 2 rows modified. If a user chooses a number of resamples greater than the sample size, the number of resamples is automatically set to the sample size (as with jackknifing). (A sketch of this procedure appears after this list.)

    - Effect sizes. Cohen’s (1988) f-squared effect size coefficients are now calculated and shown for all path coefficients. These are calculated as the absolute values of the individual contributions of the corresponding predictor latent variables to the R-squared coefficients of the criterion latent variable in each latent variable block. With these effect sizes, users can ascertain whether the effects indicated by path coefficients are small, medium, or large. The values usually recommended are 0.02, 0.15, and 0.35, respectively (Cohen, 1988). Values below 0.02 suggest effects that are too weak to be considered relevant from a practical point of view, even when the corresponding P values are statistically significant, a situation that may occur with large sample sizes. (This decomposition is sketched after the list.)

    - Full collinearity VIFs. VIFs are now shown for all latent variables, separately from the VIFs calculated for predictor latent variables in individual latent variable blocks. These new VIFs are calculated based on a full collinearity test, which identifies not only vertical but also lateral collinearity, and allows for a test of collinearity involving all latent variables in a model. Vertical, or classic, collinearity is predictor-predictor latent variable collinearity in individual blocks. Lateral collinearity is a new term that refers to predictor-criterion latent variable collinearity, a type of collinearity that can lead to particularly misleading results. Full collinearity VIFs can also be used for common method bias tests (Lindell & Whitney, 2001) that are more conservative than, and arguably superior to, the traditionally used tests relying on exploratory factor analyses. (A sketch of the full collinearity test appears after the list.)

    - Incremental code optimization. At several points the code was optimized for speed, which led to incremental gains even as a significant number of new features were added. Several of these new features required new and complex calculations, mostly to generate coefficients that were not available before.

    - Indirect and total effects. Indirect and total effects are now calculated and shown, together with the corresponding P values, standard errors, and effect sizes. The calculation of indirect and total effects can be critical in the evaluation of downstream effects of latent variables that are mediated by other latent variables, especially in complex models with multiple mediating effects along concurrent paths. Indirect effects also allow for direct estimation, via resampling, of the P values associated with mediating effects; such estimates have traditionally relied on time-consuming and not fully automated calculations based on linear (Preacher & Hayes, 2004) and nonlinear (Hayes & Preacher, 2010) assumptions. (The underlying matrix calculation is sketched after the list.)

    - P values for all weights and loadings. P values are now shown for all weights and loadings, including those associated with indicators that make up moderating variables. With these P values, users can check whether moderating latent variables satisfy validity and reliability criteria for either reflective or formative measurement. This can help users demonstrate validity and reliability in hierarchical analyses involving moderating effects, where double, triple, etc. moderating effects are tested. For instance, moderating latent variables can be created, added to the model as standardized indicators, and then their effects modeled as being moderated by other latent variables; an example of double moderation.

    - Predictive validity. Stone-Geisser Q-squared coefficients (Geisser, 1974; Stone, 1974) are now calculated and shown for each endogenous latent variable in an SEM model. The Q-squared coefficient is a nonparametric measure traditionally calculated via blindfolding. It is used for the assessment of the predictive validity (or relevance) associated with each latent variable block in the model, through the endogenous latent variable that is the criterion variable in the block. Sometimes referred to as a resampling analog of the R-squared, it is often similar in value to that measure, even though, unlike the R-squared coefficient, the Q-squared coefficient can assume negative values. Acceptable predictive validity in connection with an endogenous latent variable is suggested by a Q-squared coefficient greater than zero. (This calculation is sketched after the list.)

    - Ranked data. Users can now select an option to conduct their analyses with only ranked data, whereby all the data is automatically ranked prior to the SEM analysis (the original data is retained in unranked format). When data is ranked, the value distances that typify outliers are greatly reduced, effectively eliminating outliers without any decrease in sample size. A concomitant increase in collinearity is usually observed, but not to the point of threatening the credibility of the results. This option can be very useful in assessments of whether the presence of outliers significantly affects path coefficients and respective P values, especially when outliers are not believed to be due to measurement error. (The rank transformation is sketched after the list.)

    - Restricted ranges. Users can now run their analyses with subsamples defined by a range restriction variable, which may be standardized or unstandardized. This option is useful in multi-group analyses, in which separate analyses are conducted for each subsample and the results then compared with one another. One example would be a multi-country analysis, with each country treated as a subsample, but without separate datasets for each country having to be provided as inputs. This feature is also useful in situations where outliers cause instability in a resample set, which can lead to abnormally high standard errors and thus inflated P values. Users can remove outliers by restricting the values assumed by a variable to a range that excludes them, without having to modify and re-read a dataset. (A sketch appears after the list.)

    - Standard errors for all weights and loadings. Standard errors are now shown for all loadings and weights. Among other purposes, these standard errors can be used in multi-group analyses, with the same model but different subsamples. In these cases, users may want to compare the measurement models to ascertain their equivalence, using a multi-group comparison technique such as the one documented by Keil et al. (2000), and thus ensure that any observed differences in structural model coefficients are not due to measurement model differences. (This comparison test is sketched after the list.)

    - VIFs for all indicators. VIFs are now shown for all indicators, including those associated with moderating latent variables. With these VIFs, users can check whether moderating latent variables satisfy criteria for formative measurement, in case they do not satisfy validity and reliability criteria for reflective measurement. This can be particularly helpful in hierarchical analyses involving moderating effects, where formative latent variables are frequently employed, including cases where double, triple, etc. moderating effects are tested. Here moderating latent variables can be created, added to the model as standardized indicators, and then their effects modeled as being moderated by other latent variables, with this process being repeated at different levels.
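
The following is a minimal Python sketch of the blindfolding procedure described above. The round-robin assignment of rows to omission groups is an assumption made for illustration; the actual omission pattern used by WarpPLS may differ.

```python
import numpy as np

def blindfold_resamples(data, n_resamples):
    """Build blindfolding resamples: in each resample, one group of rows
    is replaced by the column means of the full dataset."""
    n = data.shape[0]
    # As with jackknifing, the number of resamples is capped at the sample size.
    n_resamples = min(n_resamples, n)
    col_means = data.mean(axis=0)
    resamples = []
    for k in range(n_resamples):
        resample = data.copy()
        # Round-robin group assignment (an illustrative assumption).
        rows = np.flatnonzero(np.arange(n) % n_resamples == k)
        resample[rows, :] = col_means
        resamples.append(resample)
    return resamples

# Sample size 200 with 100 resamples: 2 rows are replaced in each resample.
rng = np.random.default_rng(0)
data = rng.normal(size=(200, 5))
resamples = blindfold_resamples(data, 100)
print(len(resamples))                            # 100
print((resamples[0] != data).any(axis=1).sum())  # 2
```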
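
The effect size decomposition can be illustrated with ordinary least squares standing in for the PLS estimation (an assumption made to keep the sketch self-contained). With standardized variables, each predictor's contribution to the R-squared equals its coefficient times its correlation with the criterion, and the contributions sum to the R-squared.

```python
import numpy as np

def effect_sizes(X, y):
    """f-squared effect sizes as the absolute values of each predictor's
    contribution to R-squared: |beta_i * corr(x_i, y)|."""
    Xs = (X - X.mean(axis=0)) / X.std(axis=0)      # standardize predictors
    ys = (y - y.mean()) / y.std()                  # standardize criterion
    beta = np.linalg.lstsq(Xs, ys, rcond=None)[0]  # standardized coefficients
    r = Xs.T @ ys / len(ys)                        # predictor-criterion correlations
    return np.abs(beta * r)

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 3))
y = 0.5 * X[:, 0] + 0.2 * X[:, 1] + rng.normal(size=300)
print(effect_sizes(X, y))  # compare against the 0.02 / 0.15 / 0.35 thresholds
```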
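
A minimal sketch of the full collinearity test, applied to a matrix of latent variable scores: each variable is regressed on all of the others, and its VIF is 1 / (1 - R-squared) from that regression. The scores below are synthetic.

```python
import numpy as np

def full_collinearity_vifs(scores):
    """VIF for each latent variable, from a regression of its scores on
    the scores of all other latent variables in the model."""
    Z = (scores - scores.mean(axis=0)) / scores.std(axis=0)
    vifs = []
    for i in range(Z.shape[1]):
        y = Z[:, i]
        X = np.delete(Z, i, axis=1)
        beta = np.linalg.lstsq(X, y, rcond=None)[0]
        resid = y - X @ beta
        r2 = 1.0 - (resid @ resid) / (y @ y)
        vifs.append(1.0 / (1.0 - r2))
    return np.array(vifs)

rng = np.random.default_rng(2)
scores = rng.normal(size=(200, 4))
scores[:, 3] += 0.9 * scores[:, 0]   # induce some collinearity
print(full_collinearity_vifs(scores))
```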
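
For a recursive model, indirect and total effects follow from the matrix of direct path coefficients: the total effect matrix is the sum of all powers of the path matrix, which equals (I - B)^-1 - I, and the indirect effects are the total effects minus the direct ones. The three-variable model below is hypothetical.

```python
import numpy as np

# B[i, j] is the direct effect of latent variable j on latent variable i.
B = np.array([
    [0.0, 0.0, 0.0],   # LV1 is exogenous
    [0.5, 0.0, 0.0],   # LV1 -> LV2
    [0.2, 0.6, 0.0],   # LV1 -> LV3 and LV2 -> LV3
])

I = np.eye(B.shape[0])
total = np.linalg.inv(I - B) - I   # B + B^2 + B^3 + ...
indirect = total - B

print(indirect[2, 0])   # LV1 -> LV2 -> LV3: 0.5 * 0.6 = 0.3
print(total[2, 0])      # direct 0.2 plus indirect 0.3 = 0.5
```

Recomputing these matrices over bootstrap resamples then yields standard errors and P values for the indirect and total effects in the same way as for the direct effects.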
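
The Q-squared coefficient can be sketched by combining a blindfolding-style omission pattern with a prediction step; ordinary least squares again stands in for the PLS estimation (an assumption). Each omission group's criterion values are predicted from a model estimated without them, and Q-squared is 1 - SSE / SSO.

```python
import numpy as np

def q_squared(X, y, n_groups=10):
    """Stone-Geisser Q-squared via a blindfolding-style procedure."""
    y = y - y.mean()          # center the criterion
    X = X - X.mean(axis=0)    # center the predictors
    groups = np.arange(len(y)) % n_groups
    sse = sso = 0.0
    for g in range(n_groups):
        omit = groups == g
        beta = np.linalg.lstsq(X[~omit], y[~omit], rcond=None)[0]
        err = y[omit] - X[omit] @ beta
        sse += err @ err          # sum of squared prediction errors
        sso += y[omit] @ y[omit]  # sum of squared observed values
    return 1.0 - sse / sso

rng = np.random.default_rng(3)
X = rng.normal(size=(200, 2))
y = 0.7 * X[:, 0] + 0.5 * rng.normal(size=200)
print(q_squared(X, y))   # greater than zero suggests predictive validity
```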
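
The rank transformation itself is straightforward; a small synthetic example with an outlier:

```python
import numpy as np
from scipy.stats import rankdata

# Rank each column of the dataset; tied values receive average ranks.
data = np.array([[ 1.0, 10.0],
                 [ 2.0, 12.0],
                 [ 3.0, 11.0],
                 [50.0, 13.0]])   # 50.0 is an outlier in the first column
ranked = np.apply_along_axis(rankdata, 0, data)
print(ranked[:, 0])   # [1. 2. 3. 4.] -- the outlier's distance has collapsed
```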
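
Range restriction amounts to filtering rows on one variable before the analysis; a minimal sketch, in which the country-code column is hypothetical:

```python
import numpy as np

def restrict_range(data, col, low, high):
    """Keep only rows whose value in the range-restriction column
    falls within [low, high]."""
    mask = (data[:, col] >= low) & (data[:, col] <= high)
    return data[mask]

rng = np.random.default_rng(4)
# Column 0 holds a country code (1, 2, or 3); the rest are indicators.
data = np.column_stack([rng.integers(1, 4, size=100),
                        rng.normal(size=(100, 4))])
subsample = restrict_range(data, col=0, low=2, high=2)  # country 2 only
print(subsample.shape)
```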
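
Finally, a sketch of the pooled-standard-error t test commonly attributed to Keil et al. (2000) for comparing a coefficient across two subsamples; the coefficients, standard errors, and sample sizes below are hypothetical.

```python
import numpy as np
from scipy import stats

def multigroup_t(b1, se1, n1, b2, se2, n2):
    """Compare a path coefficient, weight, or loading across two groups
    using a pooled standard error (Keil et al., 2000)."""
    pooled = np.sqrt((n1 - 1) ** 2 / (n1 + n2 - 2) * se1 ** 2
                     + (n2 - 1) ** 2 / (n1 + n2 - 2) * se2 ** 2)
    t = (b1 - b2) / (pooled * np.sqrt(1.0 / n1 + 1.0 / n2))
    df = n1 + n2 - 2
    p = 2 * stats.t.sf(abs(t), df)   # two-tailed P value
    return t, p

t, p = multigroup_t(b1=0.62, se1=0.08, n1=150, b2=0.41, se2=0.10, n2=120)
print(t, p)
```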

References

Cohen, J. (1988). Statistical power analysis for the behavioral sciences. Hillsdale, NJ: Lawrence Erlbaum.

Geisser, S. (1974). A predictive approach to the random effects model. Biometrika, 61(1), 101-107.

Hayes, A. F., & Preacher, K. J. (2010). Quantifying and testing indirect effects in simple mediation models when the constituent paths are nonlinear. Multivariate Behavioral Research, 45(4), 627-660.

Keil, M., Tan, B. C., Wei, K.-K., Saarinen, T., Tuunainen, V., & Wassenaar, A. (2000). A cross-cultural study on escalation of commitment behavior in software projects. MIS Quarterly, 24(2), 299-325.

Lindell, M., & Whitney, D. (2001). Accounting for common method variance in cross-sectional research designs. Journal of Applied Psychology, 86(1), 114-121.

Preacher, K. J., & Hayes, A. F. (2004). SPSS and SAS procedures for estimating indirect effects in simple mediation models. Behavior Research Methods, Instruments, & Computers, 36(4), 717-731.

Stone, M. (1974). Cross-validatory choice and assessment of statistical predictions. Journal of the Royal Statistical Society, Series B, 36(1), 111-147.
