Organizational Research Methods
eISSN: 1552-7425
ISSN: 1094-4281
Country: United States
Publisher: SAGE Publications Inc.
Featured articles
For all its richness and potential for discovery, qualitative research has been critiqued as too often lacking in scholarly rigor. The authors summarize a systematic approach to new concept development and grounded theory articulation that is designed to bring “qualitative rigor” to the conduct and presentation of inductive research.
Establishing measurement invariance across groups is a logical prerequisite to conducting accurate cross-group comparisons (e.g., tests of group mean differences, invariance of structural parameter estimates), yet measurement invariance is rarely tested in organizational research. In this article, the authors (a) elaborate the importance of conducting tests of measurement invariance across groups, (b) review recommended practices for conducting tests of measurement invariance, (c) review the application of measurement invariance tests in substantive research, (d) discuss issues involved in testing the various aspects of measurement invariance, (e) present an empirical example of the analysis of measurement invariance over time, and (f) propose an integrative paradigm for conducting sequences of measurement invariance tests.
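The sequence of invariance tests described above rests on comparing nested models, typically via a chi-square difference test between a more restricted model (e.g., metric: equal loadings) and a freer one (e.g., configural). A minimal sketch of that comparison, using entirely hypothetical fit statistics (in practice they come from an SEM package such as lavaan or semopy):

```python
# Chi-square difference test between two nested invariance models.
# All fit statistics below are hypothetical illustrations.
from scipy.stats import chi2

def chisq_diff_test(chisq_restricted, df_restricted, chisq_free, df_free):
    """Return the chi-square difference statistic, its df, and the p-value."""
    d_chisq = chisq_restricted - chisq_free
    d_df = df_restricted - df_free
    p = chi2.sf(d_chisq, d_df)
    return d_chisq, d_df, p

# Hypothetical fit: configural model (loadings free) vs. metric model (loadings equal)
d_chisq, d_df, p = chisq_diff_test(chisq_restricted=112.4, df_restricted=58,
                                   chisq_free=104.1, df_free=52)
print(d_chisq, d_df, p)  # a non-significant p suggests metric invariance is tenable
```

The same comparison is repeated at each step of the sequence (configural, metric, scalar, and so on), tightening one set of equality constraints at a time.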
We aim to develop a meaningful single-source reference for management and organization scholars interested in using bibliometric methods for mapping research specialties. Such methods introduce a measure of objectivity into the evaluation of scientific literature and hold the potential to increase rigor and mitigate researcher bias in reviews of scientific literature by aggregating the opinions of multiple scholars working in the field. We introduce the bibliometric methods of citation analysis, co-citation analysis, bibliographical coupling, co-author analysis, and co-word analysis and present a workflow for conducting bibliometric studies with guidelines for researchers. We envision that bibliometric methods will complement meta-analysis and qualitative structured literature reviews as a method for reviewing and evaluating scientific literature. To demonstrate bibliometric methods, we performed a citation and co-citation analysis to map the intellectual structure of the Organizational Research Methods journal.
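The core bookkeeping behind co-citation analysis is simple: two references are co-cited each time they appear together in the same citing paper's reference list, and the resulting pair counts feed the mapping of a field's intellectual structure. A toy sketch with invented reference lists:

```python
# Count co-citation pairs across a set of citing papers' reference lists.
# The papers and references are invented toy data.
from itertools import combinations
from collections import Counter

citing_papers = {
    "paper1": ["RefA", "RefB", "RefC"],
    "paper2": ["RefA", "RefB"],
    "paper3": ["RefB", "RefC"],
}

cocitations = Counter()
for refs in citing_papers.values():
    for pair in combinations(sorted(set(refs)), 2):
        cocitations[pair] += 1

# ("RefA", "RefB") is co-cited twice here; in a real study the pair counts
# become the input to clustering or multidimensional scaling.
print(cocitations.most_common())
```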
It has become widely accepted that correlations between variables measured with the same method, usually self-report surveys, are inflated due to the action of common method variance (CMV), despite a number of sources that suggest the problem is overstated. The author argues that the popular position suggesting CMV automatically affects variables measured with the same method is a distortion and oversimplification of the true state of affairs, reaching the status of urban legend. Empirical evidence is discussed casting doubt that the method itself produces systematic variance in observations that inflates correlations to any significant degree. It is suggested that the term common method variance be abandoned in favor of a focus on measurement bias that is the product of the interplay of constructs and methods by which they are assessed. A complex approach to dealing with potential biases involves their identification and control to rule them out as explanations for observed relationships using a variety of design strategies.
The use of interrater reliability (IRR) and interrater agreement (IRA) indices has increased dramatically during the past 20 years. This popularity is, at least in part, because of the increased role of multilevel modeling techniques (e.g., hierarchical linear modeling and multilevel structural equation modeling) in organizational research. IRR and IRA indices are often used to justify aggregating lower-level data used in composition models. The purpose of the current article is to expose researchers to the various issues surrounding the use of IRR and IRA indices often used in conjunction with multilevel models. To achieve this goal, the authors adopt a question-and-answer format and provide a tutorial in the appendices illustrating how these indices may be computed using the SPSS software.
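As a minimal illustration of the agreement side of this literature, the single-item rwg index (James, Demaree, & Wolf) compares the observed variance of raters' scores to the variance expected under a uniform (no-agreement) null. The ratings below are invented, and this sketch covers only the single-item case, not the multi-item rwg(j) or the IRR indices:

```python
# Single-item interrater agreement index rwg: 1 minus the ratio of observed
# rating variance to the variance of a uniform null distribution.
from statistics import variance

def rwg(ratings, n_options):
    sigma_eu = (n_options**2 - 1) / 12       # uniform null variance for A options
    return 1 - variance(ratings) / sigma_eu  # sample variance across raters

print(rwg([4, 4, 5, 4, 3], 5))  # 0.75
```

Values near 1 are typically read as justifying aggregation of the lower-level ratings to the group level.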
This article addresses Rönkkö and Evermann’s criticisms of the partial least squares (PLS) approach to structural equation modeling. We contend that the alleged shortcomings of PLS are not due to problems with the technique, but instead to three problems with Rönkkö and Evermann’s study: (a) the adherence to the common factor model, (b) very limited simulation designs, and (c) overstretched generalizations of their findings. Whereas Rönkkö and Evermann claim to be dispelling myths about PLS, they have in reality created new myths that we, in turn, debunk. By examining their claims, our article contributes to reestablishing a constructive discussion of the PLS method and its properties. We show that PLS does offer advantages for exploratory research and that it is a viable estimator for composite factor models. This can pose an interesting alternative if the common factor model does not hold. Therefore, we conclude that PLS should continue to be used as an important statistical tool for management and organizational research, as well as other social science disciplines.

Previous research has recommended several measures of effect size for studies with repeated measurements in both treatment and control groups. Three alternate effect size estimates were compared in terms of bias, precision, and robustness to heterogeneity of variance. The results favored an effect size based on the mean pre-post change in the treatment group minus the mean pre-post change in the control group, divided by the pooled pretest standard deviation.
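The recommended estimate has a direct arithmetic form: the mean pre-post change in the treatment group minus the mean pre-post change in the control group, divided by the pooled pretest standard deviation. A sketch on invented scores:

```python
# Effect size for pretest-posttest-control designs: difference in mean
# pre-post change between groups, scaled by the pooled pretest SD.
# All scores below are invented.
from statistics import mean, stdev
from math import sqrt

def d_ppc(pre_t, post_t, pre_c, post_c):
    n_t, n_c = len(pre_t), len(pre_c)
    pooled_sd_pre = sqrt(((n_t - 1) * stdev(pre_t) ** 2 +
                          (n_c - 1) * stdev(pre_c) ** 2) / (n_t + n_c - 2))
    change = (mean(post_t) - mean(pre_t)) - (mean(post_c) - mean(pre_c))
    return change / pooled_sd_pre

print(d_ppc(pre_t=[10, 12, 11, 9], post_t=[14, 15, 13, 12],
            pre_c=[10, 11, 12, 9], post_c=[11, 11, 12, 10]))
```

Scaling by the pretest rather than posttest standard deviation keeps the denominator uncontaminated by any treatment effect on score variance.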
Because of the importance of mediation studies, researchers have been continuously searching for the best statistical test for mediation effects. The approaches that have been most commonly employed include those that use zero-order and partial correlation, hierarchical regression models, and structural equation modeling (SEM). This study extends the work of MacKinnon and colleagues (MacKinnon, Lockwood, Hoffmann, West, & Sheets, 2002; MacKinnon, Lockwood, & Williams, 2004; MacKinnon, Warsi, & Dwyer, 1995) by conducting a simulation that examines the distribution of mediation and suppression effects of latent variables with SEM, and the properties of confidence intervals developed from eight different methods. Results show that SEM provides unbiased estimates of mediation and suppression effects, and that the bias-corrected bootstrap confidence intervals perform best in testing for mediation and suppression effects. Steps to implement the recommended procedures with Amos are presented.
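The resampling logic behind bootstrap confidence intervals for an indirect effect a*b can be sketched with plain OLS regressions on observed variables. Note the assumptions here: invented data, observed rather than latent variables, and the simpler percentile interval rather than the bias-corrected variant the article recommends:

```python
# Percentile bootstrap of an indirect effect a*b, estimated with OLS on
# invented data (true a = 0.5, true b = 0.4, so true indirect = 0.2).
import numpy as np

rng = np.random.default_rng(0)
n = 200
x = rng.normal(size=n)
m = 0.5 * x + rng.normal(size=n)
y = 0.4 * m + rng.normal(size=n)

def slope(dep, preds):
    """First OLS coefficient of dep regressed on preds (intercept included)."""
    X = np.column_stack([preds, np.ones(len(dep))])
    return np.linalg.lstsq(X, dep, rcond=None)[0][0]

def indirect(x_, m_, y_):
    a = slope(m_, x_)                          # path a: m on x
    b = slope(y_, np.column_stack([m_, x_]))   # path b: y on m, controlling for x
    return a * b

boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)                # resample cases with replacement
    boot.append(indirect(x[idx], m[idx], y[idx]))

lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect = {indirect(x, m, y):.3f}, 95% percentile CI [{lo:.3f}, {hi:.3f}]")
```

A bias-corrected interval additionally shifts the percentile cutoffs by how far the bootstrap distribution's median sits from the point estimate.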
Many researchers who use same-source data face concerns about common method variance (CMV). Although post hoc statistical detection and correction techniques for CMV have been proposed, there is a lack of empirical evidence regarding their efficacy. Because of disagreement among scholars regarding the likelihood and nature of CMV in self-report data, the current study evaluates three post hoc strategies and the strategy of doing nothing within three sets of assumptions about CMV: that CMV does not exist, that CMV exists and has equal effects across constructs, and that CMV exists and has unequal effects across constructs. The implications of using each strategy within each of the three assumptions are examined empirically using 691,200 simulated data sets varying factors such as the amount of true variance and the amount and nature of CMV modeled. Based on analyses of these data, potential benefits and likely risks of using the different techniques are detailed.
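One of the three assumptions the study evaluates, CMV with equal effects across constructs, can be mimicked in a toy simulation: a single method factor loads equally on measures of two otherwise uncorrelated constructs, inflating their observed correlation. Loadings and variance shares below are invented:

```python
# Toy simulation of common method variance with equal effects: a shared
# method factor loads identically on both measures. Parameters are invented.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
t1 = rng.normal(size=n)                  # true score, construct 1
t2 = rng.normal(size=n)                  # true score, construct 2 (uncorrelated)
method = rng.normal(size=n)              # shared method factor
lam = 0.4                                # equal method loading on both measures

x1 = t1 + lam * method + rng.normal(scale=0.5, size=n)
x2 = t2 + lam * method + rng.normal(scale=0.5, size=n)

r_true = np.corrcoef(t1, t2)[0, 1]       # ~0 by construction
r_obs = np.corrcoef(x1, x2)[0, 1]        # inflated by the shared method factor
print(round(r_true, 3), round(r_obs, 3))
```

Post hoc correction strategies amount to different ways of estimating and partialling out the method factor's share of `r_obs`.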
A common practice in applications of structural equation modeling techniques is to create composite measures from individual items. The purpose of this article was to provide an empirical comparison of several composite formation methods on model fit. Data from 1,177 public school teachers were used to test a model of union commitment in which alternative composite formation methods were used to specify the measurement components of the model. Bootstrapping procedures were used to generate data for two additional sample sizes. Results indicated that the use of composites, in general, resulted in improved overall model fit as compared to treating all items as individual indicators. Lambda values and explained criterion variance indicated that this improved model fit was due to the creation of strong measurement models. Implications of these results for researchers using composites are discussed.
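The simplest composite formation method, averaging subsets of items into parcel scores that then serve as indicators in the measurement model, reduces to a few lines. The item data and the parceling scheme here are invented:

```python
# Forming composite (parcel) indicators by averaging item subsets.
# Respondent data and the parcel assignment are invented toy values.
import numpy as np

rng = np.random.default_rng(2)
items = rng.normal(loc=3.5, scale=1.0, size=(100, 6))   # 100 respondents, 6 items

parcels = {"parcel1": [0, 1, 2], "parcel2": [3, 4, 5]}  # hypothetical assignment
composites = {name: items[:, idx].mean(axis=1) for name, idx in parcels.items()}

print(composites["parcel1"].shape)  # one composite score per respondent
```

The composites, rather than the six raw items, would then be specified as indicators of the latent construct.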