
Tuesday, September 13, 2016

Measurement invariance assessment in PLS-SEM


WarpPLS users can assess measurement invariance in PLS-SEM analyses in a way analogous to a multi-group analysis. That is, they can compare pairs of measurement models to ascertain equivalence, using one of the multi-group comparison techniques building on the pooled and Satterthwaite standard error methods discussed in the article below. By doing so, they ensure that any observed between-group differences in structural model coefficients, particularly in path coefficients, are not due to measurement model differences.

Kock, N. (2014). Advanced mediating effects tests, multi-group analyses, and measurement model assessments in PLS-based SEM. International Journal of e-Collaboration, 10(3), 1-13.

For measurement invariance assessment, the techniques discussed in the article should be employed with weights and/or loadings. While with path coefficients researchers may be interested in finding statistically significant differences, with weights and loadings the opposite is typically the case: they will want to ensure that differences are not statistically significant. The reason is that significant differences between path coefficients can be artificially induced by significant differences between the weights or loadings of the models being compared.

A spreadsheet with formulas for conducting a multi-group analysis building on the pooled and Satterthwaite standard error methods is available from WarpPLS.com, under “Resources”. As indicated in the article linked above, this same spreadsheet can be used in the assessment of measurement invariance in PLS-SEM analyses.
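
For readers who prefer to see the computation spelled out, below is a minimal Python sketch of the kind of two-group comparison the spreadsheet performs. It uses commonly cited forms of the pooled standard error test and of a Welch-Satterthwaite-style test; the exact formulas in the article and spreadsheet should be taken as authoritative, and the numeric inputs here are hypothetical.

import numpy as np
from scipy import stats

def pooled_se_test(b1, se1, n1, b2, se2, n2):
    # Pooled standard error test (commonly cited Keil et al.-style form;
    # the exact form used in Kock, 2014 and the spreadsheet may differ).
    se_pooled = np.sqrt(((n1 - 1)**2 * se1**2 + (n2 - 1)**2 * se2**2) / (n1 + n2 - 2))
    t = (b1 - b2) / (se_pooled * np.sqrt(1.0 / n1 + 1.0 / n2))
    df = n1 + n2 - 2
    return t, 2 * stats.t.sf(abs(t), df)

def satterthwaite_test(b1, se1, n1, b2, se2, n2):
    # Satterthwaite-style test; the degrees of freedom follow the usual
    # Welch approximation (an assumption, not necessarily the article's choice).
    t = (b1 - b2) / np.sqrt(se1**2 + se2**2)
    df = (se1**2 + se2**2)**2 / (se1**4 / (n1 - 1) + se2**4 / (n2 - 1))
    return t, 2 * stats.t.sf(abs(t), df)

# Hypothetical loadings, standard errors, and sample sizes for two groups.
# For measurement invariance one hopes the difference is NOT significant.
print(pooled_se_test(0.82, 0.05, 150, 0.78, 0.06, 120))
print(satterthwaite_test(0.82, 0.05, 150, 0.78, 0.06, 120))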

Advantages of nonlinear over segmentation analyses in path models


Nonlinear analyses employing the software WarpPLS allow for the identification of linear segments that mirror an underlying nonlinear relationship, without the need to generate subsamples. A new article is available demonstrating the advantages of nonlinear over data segmentation analyses. These advantages include a larger overall sample size for the calculation of P values and the ability to uncover very high segment-specific path coefficients. The article's reference, abstract, and a link to the full text are available below.

Kock, N. (2016). Advantages of nonlinear over segmentation analyses in path models. International Journal of e-Collaboration, 12(4), 1-6.

The recent availability of software tools for nonlinear path analyses, such as WarpPLS, enables e-collaboration researchers to take nonlinearity into consideration when estimating coefficients of association among linked variables. Nonlinear path analyses can be applied to models with or without latent variables, and provide advantages over data segmentation analyses, including those employing finite mixture segmentation techniques (a.k.a. FIMIX). The latter assume that data can be successfully segmented into subsamples, which are then analyzed with linear algorithms. Nonlinear analyses employing WarpPLS also allow for the identification of linear segments mirroring underlying nonlinear relationships, but without the need to generate subsamples. We demonstrate the advantages of nonlinear over data segmentation analyses.

Among other things this article shows that identification of linear segments emerging from a nonlinear analysis with WarpPLS allows for: (a) a larger overall sample size for calculation of P values, which enables researchers to uncover actual segment-specific effects that could otherwise be rendered non-significant due to a combination of underestimated path coefficients and small subsample sizes; and (b) the ability to uncover very high segment-specific path coefficients, which could otherwise be grossly underestimated.
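
As a toy illustration of the idea of reading linear segments off a nonlinear fit (a sketch of the general logic only, not WarpPLS's algorithm), the Python snippet below fits a quadratic on the full simulated sample and then reports the slope of the fitted curve within each half of the data, rather than estimating separate linear models on small subsamples. All quantities are simulated.

import numpy as np

rng = np.random.default_rng(42)
n = 300
x = rng.standard_normal(n)
y = 0.2 * x + 0.6 * x**2 + 0.5 * rng.standard_normal(n)  # U-shaped relationship

b2, b1, b0 = np.polyfit(x, y, 2)  # quadratic fit using all n observations

# Slope of the fitted curve (its derivative) at each segment's mean x value,
# with segments defined here as observations below vs. above the median of x.
for label, seg in (("low-x segment", x <= np.median(x)), ("high-x segment", x > np.median(x))):
    print(label, round(b1 + 2 * b2 * x[seg].mean(), 3))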

Enjoy!

Thursday, September 1, 2016

Hypothesis testing with confidence intervals and P values


An article is now available explaining how WarpPLS users can test hypotheses based on confidence intervals, contrasting that approach with the one employing P values. A variation of the latter approach, employing T ratios, is also briefly discussed. Below are the reference, link to PDF file, and abstract for the article.

Kock, N. (2016). Hypothesis testing with confidence intervals and P values in PLS-SEM. International Journal of e-Collaboration, 12(3), 1-6.

PDF file.

Abstract:
E-collaboration researchers usually employ P values for hypothesis testing, a common practice in a variety of other fields. This is also customary in many methodological contexts, such as analyses of path models with or without latent variables, as well as simpler tests that can be seen as special cases of these (e.g., comparisons of means). We discuss here how a researcher can use another major approach for hypothesis testing, the one building on confidence intervals. We contrast this approach with the one employing P values through the analysis of a simulated dataset, created based on a model grounded on past theory and empirical research. The model refers to social networking site use at work and its impact on job performance. The results of our analyses suggest that tests employing confidence intervals and P values are likely to lead to very similar outcomes in terms of acceptance or rejection of hypotheses.

Note 1:
In Table 1 of the article, each T ratio and the corresponding confidence interval limits (lower and upper) are calculated through the formulas included below. Normally a hypothesis will not be supported if the confidence interval includes the number 0 (zero).

T ratio = (path coefficient) / (standard error).

Lower confidence interval = (path coefficient) - 1.96 * (standard error).

Upper confidence interval = (path coefficient) + 1.96 * (standard error).
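
For concreteness, here is a small Python sketch of the three formulas above; the input values are hypothetical and are not taken from Table 1 in the article.

def t_ratio_and_ci(path_coef, se):
    # T ratio and 95% confidence interval limits, as in the formulas above.
    # The 1.96 multiplier corresponds to a two-sided 95% interval.
    t_ratio = path_coef / se
    lower = path_coef - 1.96 * se
    upper = path_coef + 1.96 * se
    return t_ratio, lower, upper

# Hypothetical path coefficient and standard error (not from the article).
t, lo, hi = t_ratio_and_ci(0.32, 0.08)
print(t, lo, hi)  # supported at the 5% level if the interval excludes zero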

Note 2:
Here is a quick note for technical readers. The P values reported in Table 1 in the article are calculated from the T ratios using the incomplete beta function, which does not assume that the T distribution is well approximated by the normal distribution. In reality, T distributions have heavier tails than normal distributions, with the difference becoming less noticeable as sample sizes increase.
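
To make the link between T ratios, the incomplete beta function, and P values concrete, the sketch below computes a two-tailed P value both ways; the T ratio and degrees of freedom are hypothetical placeholders (the exact degrees-of-freedom choice made by WarpPLS is not restated here).

from scipy import special, stats

def p_from_t(t_ratio, df):
    # Two-tailed P value from a T ratio via the regularized incomplete beta
    # function; equivalent to 2 * stats.t.sf(abs(t_ratio), df).
    x = df / (df + t_ratio**2)
    return special.betainc(df / 2.0, 0.5, x)

t_ratio, df = 2.5, 98  # hypothetical values
print(p_from_t(t_ratio, df), 2 * stats.t.sf(abs(t_ratio), df))  # the two should match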


Wednesday, June 15, 2016

Simpson’s paradox, moderation, and the emergence of quadratic relationships in path models


Among the many innovative features of WarpPLS are those that deal with identification of Simpson’s paradox and modeling of nonlinear relationships. A new article discussing various issues that are important for the understanding of the usefulness of these features is now available. Its reference, abstract, and link to full text are available below.

Kock, N., & Gaskins, L. (2016). Simpson’s paradox, moderation, and the emergence of quadratic relationships in path models: An information systems illustration. International Journal of Applied Nonlinear Science, 2(3), 200-234.

While Simpson’s paradox is well-known to statisticians, it seems to have been largely neglected in many applied fields of research, including the field of information systems. This is problematic because of the strange nature of the phenomenon, the wrong conclusions and decisions to which it may lead, and its likely frequency. We discuss Simpson’s paradox and interpret it from the perspective of path models with or without latent variables. We define it mathematically and argue that it arises from incorrect model specification. We also show how models can be correctly specified so that they are free from Simpson’s paradox. In the process of doing so, we show that Simpson’s paradox may be a marker of two types of co-existing relationships that have been attracting increasing interest from information systems researchers, namely moderation and quadratic relationships.

Among other things this article shows that: (a) Simpson’s paradox may be caused by model misspecification, and thus can in some cases be fixed by proper model specification; (b) a type of model misspecification that may cause Simpson’s paradox involves missing a moderation relationship that exists at the population level; (c) Simpson’s paradox may actually be a marker of nonlinear relationships of the quadratic type, which are induced by moderation; and (d) there is a duality involving moderation and quadratic relationships, which requires separate and targeted analyses for their proper understanding.
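
As a simple illustration of the sign reversal at the heart of the phenomenon (a sketch only; WarpPLS's own diagnostics are more elaborate), the Python snippet below flags predictors whose regression (path) coefficient and zero-order correlation with the criterion have opposite signs, using simulated data in which one predictor's sign reverses once the other is controlled for.

import numpy as np

def simpsons_paradox_flags(X, y):
    # Flag predictors whose multiple-regression coefficient and zero-order
    # correlation with y have opposite signs (one operational marker of a
    # possible Simpson's paradox instance).
    Xc = (X - X.mean(axis=0)) / X.std(axis=0)
    yc = (y - y.mean()) / y.std()
    betas, *_ = np.linalg.lstsq(Xc, yc, rcond=None)
    corrs = Xc.T @ yc / len(yc)
    return np.sign(betas) != np.sign(corrs)

rng = np.random.default_rng(1)
n = 1000
x2 = rng.standard_normal(n)
x1 = 0.9 * x2 + 0.3 * rng.standard_normal(n)
y = -0.5 * x1 + 1.0 * x2 + 0.3 * rng.standard_normal(n)
print(simpsons_paradox_flags(np.column_stack([x1, x2]), y))  # expect [True, False]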

Enjoy!

Saturday, June 11, 2016

Interview video: Conference on Information Systems in Latin America


Recently an interview was conducted for the 3rd Conference on Information Systems in Latin America. In it, Dr. Ned Kock was interviewed by Dr. Alexandre Graeml. The topics covered include: structural equation modeling (SEM), partial least squares (PLS) and related techniques, PLS-based SEM, covariance-based SEM, factors versus composites, nonlinear analyses, and WarpPLS.

WarpPLS and its application to research in business and information systems

The link below is for the Conference’s web site.

ISLA 2016 - Information Systems in Latin America Conference

Enjoy!

Thursday, June 9, 2016

PLS-SEM performance with non-normal data


Many claims have been made in the past about the advantages of structural equation modeling employing the partial least squares method (PLS-SEM). While some claims may have been exaggerated, we are continuously finding that others have not. One such claim, falling in the latter category (i.e., not an exaggeration), is that PLS-SEM is robust to deviations from normality. In other words, PLS-SEM performs quite well with non-normal data.

A new article illustrating this advantage of PLS-SEM is now available. Its reference, abstract, and link to full text are available below.

Kock, N. (2016). Non-normality propagation among latent variables and indicators in PLS-SEM simulations. Journal of Modern Applied Statistical Methods, 15(1), 299-315.

Structural equation modeling employing the partial least squares method (PLS-SEM) has been extensively used in business research. Often the use of this method is justified based on claims about its unique performance with small samples and non-normal data, which call for performance analyses. How normal and non-normal data are created for the performance analyses is examined. A method is proposed for the generation of data for exogenous latent variables and errors directly, from which data for endogenous latent variables and indicators are subsequently obtained based on model parameters. The emphasis is on the issue of non-normality propagation among latent variables and indicators, showing that this propagation can be severely impaired if certain steps are not taken. A key step is inducing non-normality in structural and indicator errors, in addition to exogenous latent variables. Illustrations of the method and its steps are provided through simulations based on a simple model of the effect of e-collaboration technology use on job performance.

The article’s main goal is to discuss a method to create non-normal data where the data creator has full access to all data elements, including factor or composite scores and all error terms, and where severe non-normality is extended to the error terms. In the process of achieving this goal, the article demonstrates that PLS-SEM is very robust to severe deviations from normality, even when these deviations apply to all error terms. This is an issue that is often glossed over in PLS-SEM performance tests with non-normal data.
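
The general recipe described in the abstract (induce non-normality in the exogenous latent variable and in the structural and indicator errors, then compose the endogenous latent variable and the indicators from model parameters) can be sketched in Python as follows. The skewing transform used here is a simple stand-in and is not necessarily the transformation employed in the article; all parameter values are hypothetical.

import numpy as np

rng = np.random.default_rng(0)
n = 500

def skew(x, k=0.5):
    # Simple stand-in non-normality transform (an assumption; the article's
    # actual transformation may differ): exponentiate to induce skewness,
    # then re-standardize.
    z = np.exp(k * x)
    return (z - z.mean()) / z.std()

# Exogenous latent variable (e.g., e-collaboration technology use), made non-normal.
ksi = skew(rng.standard_normal(n))

# Structural error, also made non-normal, then the endogenous latent variable
# (e.g., job performance) composed from the model parameter beta.
beta = 0.4
zeta = skew(rng.standard_normal(n))
eta = beta * ksi + np.sqrt(1 - beta**2) * zeta

# Indicators: loadings times the latent variable plus non-normal indicator errors.
loadings = np.array([0.7, 0.8, 0.75])
indicators = np.column_stack([
    lam * eta + np.sqrt(1 - lam**2) * skew(rng.standard_normal(n))
    for lam in loadings
])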

Readers may also find the YouTube video linked below useful in the context of this discussion.

View Skewness and Kurtosis in WarpPLS

Enjoy!

A thank you note to the participants in the 2016 PLS Applications Symposium


This is just a thank you note to those who participated, either as presenters or members of the audience, in the 2016 PLS Applications Symposium:

http://plsas.net/

As in previous years, it seems that it was a good idea to run the Symposium as part of the Western Hemispheric Trade Conference. This allowed attendees to take advantage of a subsidized registration fee and also to participate in other Conference sessions and in the Conference's social event.

I have been told that the proceedings will be available soon from the Western Hemispheric Trade Conference web site.

Also, the full-day workshop on PLS-SEM using the software WarpPLS was well attended. The workshop was fairly hands-on and interactive, and some participants had a great deal of expertise in PLS-SEM and WarpPLS. It was a joy to conduct the workshop!

As soon as the dates are set, we will announce next year's PLS Applications Symposium. Like this year's Symposium, it will take place in Laredo, Texas, probably in mid-April as well.

Thank you and best regards to all!