Research and Publications - Department of Information Systems, Statistics & Management Science

Recent Submissions

  • Item
    Exact Tolerance Interval with Specified Tail Probabilities and a Control Chart for the Sample Variance
    (Wiley, 2022-12-02) Mosab Alqurashi; Subha Chakraborti; Chase Holcombe
    A 𝛽-content tolerance interval (TI) is a statistical interval that contains at least a specified fraction (proportion) of the population with a given confidence level. When we are interested in the precision of a quality characteristic, a TI for the sample variance is useful. In this paper, we consider an exact two-sided 𝛽-content TI for the sample variance from a normal distribution with a specified ratio of the tail probabilities. The proposed tolerance interval allows the practitioner more control over how the probabilities in the tails are distributed, which may be useful in certain applications. A comparison with an existing two-sided 𝛽-content TI shows that the proposed TI is better on the basis of the expected coverage and the standard deviation of the coverage. In addition, the proposed TI is shown to require fewer subgroups to achieve a specified accuracy level. Moreover, a Phase II control chart with guaranteed performance is obtained from the proposed TI. Finally, real and simulated data sets are used for illustration.
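    As a rough illustration of the 𝛽-content criterion, the Monte Carlo sketch below estimates the probability that a naive plug-in interval for the sample variance actually contains at least 𝛽 of the sampling distribution of S². All settings (subgroup size, number of subgroups, equal tail split) are assumptions for illustration, not the paper's exact construction; the point is that plug-in limits fall well short of a nominal confidence level, which is what an exact TI corrects.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Assumed illustration settings (not from the paper):
n, m = 5, 30              # subgroup size, number of Phase I subgroups
beta = 0.90               # required content of the tolerance interval
nu = m * (n - 1)          # pooled Phase I degrees of freedom

# Naive plug-in limits [c1*Sp^2, c2*Sp^2] splitting 1 - beta equally between
# the tails of the chi2(n-1)/(n-1) distribution of S^2 (sigma^2 = 1 w.l.o.g.);
# the paper instead derives exact constants for a *specified* tail ratio.
c1 = stats.chi2.ppf((1 - beta) / 2, n - 1) / (n - 1)
c2 = stats.chi2.ppf(1 - (1 - beta) / 2, n - 1) / (n - 1)

def holds_content():
    sp2 = rng.chisquare(nu) / nu          # pooled Phase I sample variance
    lo, hi = c1 * sp2, c2 * sp2
    content = stats.chi2.cdf(hi * (n - 1), n - 1) - stats.chi2.cdf(lo * (n - 1), n - 1)
    return content >= beta                # did the interval keep its content?

conf = np.mean([holds_content() for _ in range(20_000)])
print(f"P(content >= {beta}) ~ {conf:.3f}  # far below, e.g., a 95% confidence target")
```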
  • Item
    A Distribution-free Phase II Control Chart for Multivariate Individual Data with Simple Post Signal Diagnostics
    (Taylor & Francis, 2024-02-13) Chase Holcombe; Mosab Alqurashi; Subha Chakraborti
    Multivariate statistical process control (MSPC) charts are particularly useful when there is a need to simultaneously monitor several quality characteristics of a process. Most control charts in MSPC assume that the quality characteristics follow some parametric multivariate distribution, such as the normal. This assumption is almost impossible to justify in practice. Distribution-free MSPC charts are attractive, as they can overcome this hurdle by guaranteeing a stable (or in-control (IC)) performance of the control chart without the assumption of a parametric multivariate process distribution. Utilizing an existing distribution-free multivariate tolerance interval, we propose a Phase II Shewhart-type distribution-free MSPC chart for individual observations, with control limits based on Phase I order statistics. In addition to being easy to interpret, the proposed chart preserves the original scale of the measurements and can easily identify the out-of-control variables after a signal. The exact in-control performance, from both the conditional and the unconditional perspectives, is presented and examined along with the determination of the control limits. The out-of-control performance of the chart is studied by simulation for data from a number of multivariate distributions. Illustrative examples are provided for chart implementation, using both real and simulated data, along with a summary and conclusions.
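    The core idea of distribution-free limits built from Phase I order statistics, together with simple per-variable post-signal diagnostics, can be sketched as follows. The min/max signaling rule, the sample sizes, and the shift are assumptions for illustration; the paper's chart and its exact conditional/unconditional run-length analysis are more refined.

```python
import numpy as np

rng = np.random.default_rng(42)

# Assumed sizes for illustration (not from the paper):
m, p = 200, 3                   # Phase I sample size, number of variables
phase1 = rng.normal(size=(m, p))

# Distribution-free limits from Phase I order statistics: here simply the
# extreme order statistics (min and max) of each variable, on the original
# scale of the measurements.
lcl = phase1.min(axis=0)
ucl = phase1.max(axis=0)

def monitor(x):
    """Return the indices of out-of-control variables for one observation."""
    return np.flatnonzero((x < lcl) | (x > ucl))

# Phase II: individual observation with variable 0 shifted upward
x_new = rng.normal(loc=[3.0, 0.0, 0.0])
flagged = monitor(x_new)
print("signal" if flagged.size else "in control", "-> variables:", flagged)
```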
  • Item
    Popperian Falsificationism in IS: Major Confusions and Harmful Influences
    (Association for Information Systems, 2023) Mao, Mingsong; Siponen, Mikko; Nathan, Marco
    The current relation between Popper’s philosophy of science and Information Systems (IS) is complex and often confused. On the one hand, many influential members of the IS community claim that much IS research follows Popper’s falsificationism. On the other hand, many assumptions underlying Popper’s falsificationism, including the view of theories as exceptionless laws that are rejected by a single unsupportive observation, are inappropriate and misleading. Moreover, Popper also rejected all inductive inferences and inductive methods as unscientific, which, alas, has led some influential IS scholars to dismiss inductive inferences in major IS methodologies. Such Popperian advice is harmful, as virtually all statistical or qualitative IS research relies on inductive inferences – and there is nothing wrong with that. Finally, we offer a solution for how to deal with the scientific significance of the problem of induction. This solution is inductive fallibilism. It means recognizing that theories, rather than always being held as true or false simpliciter, often come with varying degrees of supportive and unsupportive inductive evidence.
  • Item
    The Impacts of Internet Monitoring on Employees' Cyberloafing and Organizational Citizenship Behavior: A Longitudinal Field Quasi-Experiment
    (Information Systems Research, 2023) Jiang, Hemin; Siponen, Mikko; Jiang, Zhenhui; Tsohou, Aggeliki
    Many organizations have adopted internet monitoring to regulate employees’ cyberloafing behavior. Although one might intuitively assume that internet monitoring can be effective in reducing cyberloafing, there is a lack of research examining why the effect can occur and whether it can be sustained. Furthermore, little research has investigated whether internet monitoring can concurrently induce any side effects in employee behavior. In this paper, we conducted a longitudinal field quasi-experiment to examine the impacts of internet monitoring on employees’ cyberloafing and organizational citizenship behavior (OCB). Our results show that internet monitoring did reduce employees’ cyberloafing by augmenting employees’ perceived sanction concerns and information privacy concerns related to cyberloafing. The results also show that internet monitoring could produce the side effect of reducing employees’ OCB. Interestingly, when examining the longitudinal effects of internet monitoring four months after its implementation, we found that the effect of internet monitoring on cyberloafing was not sustained, but the effect on OCB toward organizations still persisted. Our study advances the literature on deterrence theory by empirically investigating both the intended and side effects of deterrence and how the effects change over time. It also has important broader implications for practitioners who design and implement information systems to regulate employee noncompliance behavior.
  • Item
    Stage Theorizing in Behavioral Information Systems Security Research
    (HICSS, 2024) Siponen, Mikko
    In information systems (IS) and IS security (ISS) literature, models are commonly divided into variance and process models. In other scientific disciplines, models are instead commonly divided into stage-less versus stage models. This division is also useful in ISS, for two reasons. First, despite common claims, most IS and ISS models, especially in behavioral research, may not be variance models. Second, not only users’ ISS behavior but also their reasons for it may change over time. Stage models can be helpful in capturing this development and change in terms of idealized stages. However, while stage models exist in IS(S), their philosophical foundations benefit from clarification. For instance, the requirements for stage theories cannot be unreservedly copied from other disciplines, such as health psychology, for use in ISS research. ISS scholars must instead proceed on a case-by-case basis when building a stage model. To aid in this, cyber security examples are used here to illustrate the concepts and usefulness of stage models. I also explain how stage models differ from process models, which also model change.
  • Item
    When Empirical Contributions are More Important Than Theoretical Contributions
    (ECIS, 2024) Siponen, Mikko; Jiang, Hemin; Klaavuniemi, Tuula
    Making a theoretical contribution (TC) is a common requirement for the top Information Systems (IS) journals. We argue that the role of TC is misunderstood in IS, where TC is a requirement for paper acceptance. However, TC should be required at the level of research programs. In fact, research programs commonly require studies whose contribution is empirical, with TC coming later. Empirical contributions include (i) obtaining stronger empirical tests, (ii) finding anomalies, (iii) examining a long-term effect or result, and (iv) comparing effects against those predicted by rival theories. To repair the situation, we first argue for requiring TC at the level of research programs. We then propose that the IS community should recognize studies (e.g., i–iv) in which the nature of the contribution is empirical, and TC comes later. We further suggest that the problems related to HARKing (Hypothesizing After Results are Known) are minimized not by requiring TC, but by subjecting the empirical findings to stronger causal tests.
  • Item
    Testing the Dominant Mediator in EPPM: An Empirical Study on Household Anti-Malware Software Users
    (Elsevier, Inc., 2024) Xie, Yitian; Siponen, Mikko; Laatikainen, Gabriella; Moody, Gregory D.; Zheng, Xiaosong
    A key research area in information systems security (ISec) is explaining or improving users’ IS security outcomes through the lens of the extended parallel process model (EPPM). While theoretical constructs of emotional valence (e.g., fear) and cognitive valence (e.g., perceived efficacy) have been treated as mediators in previous EPPM-related ISec studies, existing research has ignored the value of testing and reporting the dominant mediator between the emotional valence and the cognitive valence. In this paper, we reintroduce the theoretical origins of the dominant mediator assumption in EPPM and highlight its merits using the multiple mediation method. Theoretically, we illustrate how testing and reporting the dominant mediator can help identify the dominant mechanism triggering specific behavioral outcomes. Further, this paper questions the dominant mediating role of fear on the behavioral outcome in the ISec context. Methodologically, this study proposes to assess the dominant mediator via a multiple mediation model instead of the discriminant value equation introduced by Witte (1995) and Witte et al. (1996), and enhanced by Chen et al. (2021), when testing the EPPM theory in the ISec context.
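    The multiple-mediation contrast the paper advocates can be sketched with a bootstrap comparison of the two indirect effects. The simulated data, variable names, and effect sizes below are assumptions for illustration only, not the paper's data, measures, or exact model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated stand-ins (assumed): X = threat appeal, M1 = fear,
# M2 = perceived efficacy, Y = protective behavior
n = 500
X = rng.normal(size=n)
M1 = 0.3 * X + rng.normal(size=n)
M2 = 0.5 * X + rng.normal(size=n)
Y = 0.2 * M1 + 0.6 * M2 + rng.normal(size=n)

def indirect_effects(idx):
    x, m1, m2, y = X[idx], M1[idx], M2[idx], Y[idx]
    a1 = np.polyfit(x, m1, 1)[0]                 # path X -> M1
    a2 = np.polyfit(x, m2, 1)[0]                 # path X -> M2
    Z = np.column_stack([m1, m2, x, np.ones(n)]) # Y regressed on M1, M2, X
    b1, b2, _, _ = np.linalg.lstsq(Z, y, rcond=None)[0]
    return a1 * b1, a2 * b2                      # the two indirect effects

boot = np.array([indirect_effects(rng.integers(0, n, n)) for _ in range(2000)])
contrast = boot[:, 0] - boot[:, 1]               # fear path minus efficacy path
lo, hi = np.percentile(contrast, [2.5, 97.5])
print(f"95% bootstrap CI for (fear - efficacy) indirect effect: [{lo:.3f}, {hi:.3f}]")
# A CI excluding 0 indicates a dominant mediator (here, by construction, efficacy).
```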
  • Item
    Effective screening strategies for safe opening of universities under Omicron and Delta variants of COVID-19
    (Nature Portfolio, 2022) Rabil, Marie Jeanne; Tunc, Sait; Bish, Douglas R.; Bish, Ebru K.; Virginia Polytechnic Institute & State University; University of Alabama Tuscaloosa
    As new COVID-19 variants emerge, and disease and population characteristics change, screening strategies may also need to change. We develop a decision-making model that can assist a college in determining an optimal screening strategy based on its characteristics and resources, considering COVID-19 infections/hospitalizations/deaths, peak daily hospitalizations, and the tests required. We also use this tool to generate screening guidelines for the safe opening of college campuses. Our compartmental model simulates disease spread on a hypothetical college campus under co-circulating variants with different disease dynamics, considering: (i) the heterogeneity in disease transmission and outcomes for faculty/staff and students based on vaccination status and level of natural immunity; and (ii) variant- and dose-dependent vaccine efficacy. Using the Spring 2022 academic semester as a case study, we study routine screening strategies and find that screening the faculty/staff less frequently than the students, and/or the boosted and vaccinated less frequently than the unvaccinated, may avert a higher number of infections per test, compared to universal screening of the entire population at a common frequency. We also discuss key policy issues, including the need to revisit the mitigation objective over time, effective strategies that are informed by booster coverage, and whether and when screening alone can compensate for low booster coverage.
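    A toy discrete-time compartmental sketch of routine screening is given below. The population size, rates, and test characteristics are assumed round numbers; the paper's model is far richer, stratifying by variant, student/faculty role, vaccination status, and natural immunity.

```python
# Toy discrete-time SIR-style model with routine screening (all parameters
# are illustrative assumptions, not the paper's calibrated values).
N = 5455                   # campus population
beta, gamma = 0.5, 1 / 7   # transmission rate, recovery rate (per day)
screen_every = 3           # each person screened every 3 days on average
sensitivity = 0.8          # probability a test detects an infectious person

S, I, R = N - 10.0, 10.0, 0.0
detected = 0.0
for day in range(120):     # roughly one semester
    new_inf = beta * S * I / N
    found = (sensitivity / screen_every) * I   # moved to isolation by screening
    recovered = gamma * I
    S -= new_inf
    I += new_inf - recovered - found
    R += recovered + found
    detected += found

print(f"detected by screening: {detected:.0f}, never infected: {S:.0f} of {N}")
```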
  • Item
    Benefits of integrated screening and vaccination for infection control
    (PLOS, 2022) Rabil, Marie Jeanne; Tunc, Sait; Bish, Douglas R.; Bish, Ebru K.; Virginia Polytechnic Institute & State University; University of Alabama Tuscaloosa
    Importance: Screening and vaccination are essential in the fight against infectious diseases, but need to be integrated and customized based on community and disease characteristics. Objective: To develop effective screening and vaccination strategies, customized for a college campus, to reduce COVID-19 infections, hospitalizations, deaths, and peak hospitalizations. Design, Setting, and Participants: We construct a compartmental model of disease spread under vaccination and routine screening, and study the efficacy of four mitigation strategies (routine screening only, vaccination only, vaccination with partial or full routine screening), and a no-intervention strategy. The study setting is a hypothetical college campus of 5,000 students and 455 faculty members during the Fall 2021 academic semester, when the Delta variant was the predominant strain. For sensitivity analysis, we vary the screening frequency, daily vaccination rate, initial vaccine coverage, and screening and vaccination compliance; and consider scenarios that represent low/medium/high transmission and test efficacy. Model parameters come from publicly available or published sources. Results: With low initial vaccine coverage (30% in our study), even aggressive vaccination and screening result in a high number of infections: 1,020 to 2,040 (1,530 to 2,480) with routine daily (every other day) screening of the unvaccinated; 280 to 900 with daily screening extended to the newly vaccinated in base- and worst-case scenarios, which respectively consider reproduction numbers of 4.75 and 6.75 for the Delta variant. Conclusion: Integrated vaccination and routine screening can allow for a safe opening of a college when both the vaccine effectiveness and the initial vaccine coverage are sufficiently high. The interventions need to be customized considering the initial vaccine coverage, estimated compliance, screening and vaccination capacity, disease transmission and adverse outcome rates, and the number of infections/peak hospitalizations the college is willing to tolerate.
  • Item
    Phase II control charts for monitoring dispersion when parameters are estimated
    (Taylor & Francis, 2017) Diko, M. D.; Goedhart, R.; Chakraborti, S.; Does, R. J. M. M.; Epprecht, E. K.; University of Amsterdam; University of Alabama Tuscaloosa; Pontificia Universidade Catolica do Rio de Janeiro
    Shewhart control charts are among the most popular control charts used to monitor process dispersion. Basing these control charts on the assumption of known in-control process parameters is often unrealistic. In practice, estimates are used to construct the control charts, and this has substantial consequences for the in-control and out-of-control chart performance. The effects are especially severe when the number of Phase I subgroups used to estimate the unknown process dispersion is small. Typically, recommendations are to use around 30 subgroups of size 5 each. We derive and tabulate new corrected charting constants that should be used to construct the estimated probability limits of the Phase II Shewhart dispersion (e.g., range and standard deviation) control charts for a given number of Phase I subgroups, subgroup size, and nominal in-control average run length (ICARL). These control limits account for the effects of parameter estimation. Two approaches are used to find the new charting constants, a numerical and an analytical approach, which give similar results. It is seen that the charts based on the corrected probability limits achieve the desired nominal ICARL performance, but the out-of-control average run-length performance deteriorates when both the size of the shift and the number of Phase I subgroups are small. This is the price one must pay for accounting for the effects of parameter estimation so that the in-control performance is as advertised. An illustration using real-life data is provided, along with a summary and recommendations.
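    For a flavor of the ICARL-based correction, the sketch below solves numerically for a charting constant whose unconditional in-control ARL matches the nominal value. It uses a deliberately simplified one-sided upper chart for S² under normality with Monte Carlo over the Phase I pooled variance; the settings are assumptions, and the paper tabulates exact constants for the standard dispersion charts.

```python
import numpy as np
from scipy import stats, optimize

n, m = 5, 30                  # assumed subgroup size and Phase I subgroups
nu = m * (n - 1)              # pooled degrees of freedom
arl0 = 370.4                  # nominal in-control average run length (ICARL)

rng = np.random.default_rng(3)
w = rng.chisquare(nu, 200_000)            # W = nu * Sp^2 / sigma^2 ~ chi2(nu)

def icarl(c):
    # conditional false-alarm rate of the rule S_i^2 > c * Sp^2, given Sp^2
    p = stats.chi2.sf(c * w * (n - 1) / nu, n - 1)
    return np.mean(1.0 / p)               # unconditional ICARL = E[1 / p]

c0 = stats.chi2.isf(1 / arl0, n - 1) / (n - 1)   # constant ignoring estimation
c_corr = optimize.brentq(lambda c: icarl(c) - arl0, c0 / 2, 2 * c0)
print(f"uncorrected c = {c0:.3f} -> ICARL ~ {icarl(c0):.0f}")
print(f"corrected   c = {c_corr:.3f} -> ICARL ~ {arl0:.1f}")
```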
  • Item
    Shewhart control charts for dispersion adjusted for parameter estimation
    (Taylor & Francis, 2017) Goedhart, Rob; da Silva, Michele M.; Schoonhoven, Marit; Epprecht, Eugenio K.; Chakraborti, Subha; Does, Ronald J. M. M.; Veiga, Alvaro; University of Amsterdam; Pontificia Universidade Catolica do Rio de Janeiro; University of Alabama Tuscaloosa
    Several recent studies have shown that the number of Phase I samples required for a Phase II control chart with estimated parameters to perform properly may be prohibitively high. As a more practical alternative, adjusting the control limits has been considered in the literature. We consider this problem for the classic Shewhart charts for process dispersion under normality and present an analytical method to determine the adjusted control limits. Furthermore, we examine the performance of the resulting chart at signaling increases in the process dispersion. The proposed adjustment ensures that a minimum in-control performance of the control chart is guaranteed with a specified probability. This performance is indicated in terms of the false alarm rate or, equivalently, the in-control average run length. We also discuss the tradeoff between the in-control and out-of-control performance. Since our adjustment is based on exact analytical derivations, the recently suggested bootstrap method is no longer necessary. A real-life example is provided in order to illustrate the proposed methodology.
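    The character of such an exact analytical adjustment can be illustrated on a simplified one-sided rule (signal when S² exceeds c·Sp², under normality; the settings below are assumptions, not the paper's values). Requiring the conditional in-control ARL to meet the nominal value with a specified probability reduces to a chi-square quantile inequality, so the adjusted constant comes out in closed form.

```python
from scipy import stats

n, m = 5, 30                 # assumed subgroup size and Phase I subgroups
nu = m * (n - 1)             # pooled degrees of freedom
arl0, p0 = 370.4, 0.90       # nominal in-control ARL, guarantee probability

# Conditional ARL >= arl0  <=>  chi2.sf(c * W * (n-1)/nu, n-1) <= 1/arl0,
# where W = nu * Sp^2 / sigma^2 ~ chi2(nu). Solving for W and demanding the
# inequality hold with probability p0 gives the adjusted constant directly:
k = stats.chi2.isf(1 / arl0, n - 1)
c_unadj = k / (n - 1)                                  # ignores estimation error
c_adj = k * nu / ((n - 1) * stats.chi2.ppf(1 - p0, nu))
print(f"unadjusted c = {c_unadj:.3f}, adjusted c = {c_adj:.3f}  (wider limit)")
```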