Department of Information Systems, Statistics & Management Science
Browsing Department of Information Systems, Statistics & Management Science by Title
Now showing 1 - 20 of 60
Item A Distribution-free Phase II Control Chart for Multivariate Individual Data with Simple Post Signal Diagnostics (Taylor & Francis, 2024-02-13) Chase Holcombe; Mosab Alqurashi; Subha Chakraborti
Multivariate statistical process control (MSPC) charts are particularly useful when there is a need to simultaneously monitor several quality characteristics of a process. Most control charts in MSPC assume that the quality characteristics follow some parametric multivariate distribution, such as the normal. This assumption is almost impossible to justify in practice. Distribution-free MSPC charts are attractive, as they overcome this hurdle by guaranteeing a stable (or in-control (IC)) performance of the control chart without the assumption of a parametric multivariate process distribution. Utilizing an existing distribution-free multivariate tolerance interval, we propose a Phase II Shewhart-type distribution-free MSPC chart for individual observations, with control limits based on Phase I order statistics. In addition to being easy to interpret, the proposed chart preserves the original scale of measurements and can easily identify out-of-control variables after a signal. The exact in-control performance, from both the conditional and unconditional perspectives, is presented and examined along with the determination of the control limits. The out-of-control performance of the chart is studied by simulation for data from a number of multivariate distributions. Illustrative examples are provided for chart implementation, using both real and simulated data, along with a summary and conclusions.

Item Advances in mixture modeling and model based clustering (University of Alabama Libraries, 2015) Michael, Semhar K.; Melnykov, Volodymyr; University of Alabama Tuscaloosa
Cluster analysis is part of unsupervised learning that deals with finding groups of similar observations in heterogeneous data.
There are several clustering approaches, all aiming to minimize the within-cluster variance while maximizing the variance between clusters. K-means or hierarchical clustering with different linkages can be thought of as distance-based approaches. Another approach is model-based clustering, which relies on the idea of finite mixture models. This dissertation proposes new advances in the clustering area, mostly related to model-based clustering and its extension to the K-means algorithm. This report has five chapters. The first chapter is a literature review on recent advances in the area of model-based clustering and finite mixture modeling. Main advances and challenges are described in the methodology section, and some interesting and diverse applications of model-based clustering are presented in the application section. The second chapter deals with a simulation study conducted to analyze the factors that affect the complexity of model-based clustering. In the third chapter we develop a methodology for model-based clustering of regression time series data and show its application to annual tree rings. In the fourth chapter, we utilize the relationship between model-based clustering and the K-means algorithm to develop a methodology for merging clusters formed by K-means to find a meaningful grouping. The final chapter is dedicated to the problem of initialization in model-based clustering. It is a well-known fact that the performance of model-based clustering is highly dependent on the initialization of the EM algorithm, and so far there is no method that works comprehensively in all situations.
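The initialization sensitivity just described is easy to demonstrate. The sketch below is not the dissertation's method: it is a minimal pure-Python EM for a two-component univariate Gaussian mixture on made-up data, run from several random starts, keeping the run with the best log-likelihood (the idea behind multi-start strategies such as emEM).

```python
import random
import math

def em_1d_gmm(data, n_iter=50, seed=0):
    """One EM run for a two-component 1-D Gaussian mixture from a random start."""
    rng = random.Random(seed)
    mu = rng.sample(data, 2)          # random-start means: two observations
    var = [1.0, 1.0]
    pi = [0.5, 0.5]
    ll = float("-inf")
    for _ in range(n_iter):
        # E-step: posterior membership probabilities and log-likelihood.
        resp, ll = [], 0.0
        for x in data:
            dens = [pi[k] / math.sqrt(2 * math.pi * var[k])
                    * math.exp(-(x - mu[k]) ** 2 / (2 * var[k])) for k in range(2)]
            tot = sum(dens)
            resp.append([d / tot for d in dens])
            ll += math.log(tot)
        # M-step: update mixing weights, means, and variances.
        for k in range(2):
            nk = sum(r[k] for r in resp)
            pi[k] = nk / len(data)
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
            var[k] = max(sum(r[k] * (x - mu[k]) ** 2
                             for r, x in zip(resp, data)) / nk, 1e-6)
    return ll, mu

data = [1.0, 1.2, 0.8, 1.1, 5.0, 5.3, 4.8, 5.1]   # two well-separated groups
# Multiple short runs from different random starts; keep the best.
runs = [em_1d_gmm(data, seed=s) for s in range(5)]
best_ll, best_mu = max(runs)
```

Different seeds can land on different local optima of the likelihood; comparing the attained log-likelihoods across starts is the simplest defense.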
In this project, we use the idea of model averaging and initialization via the emEM algorithm to address this problem.

Item Advancing evidence-based practice in systems development: providing juried knowledge to software professionals (University of Alabama Libraries, 2014) Hassler, Edgar E.; Hale, David P.; University of Alabama Tuscaloosa
The concept of utilizing information derived from carefully crafted scientific research to optimize the efficiency and/or effectiveness of a practice is by no means a new idea. For over two thousand years physicians, scientists, and business professionals have relied on evidence to improve decision making. The advances made over the last 50 years with regard to Information Systems (IS), and the proliferation of technology, set the stage for a new paradigm in the use of information in practice, commonly referred to as Evidence-Based Practice (EBP). Originating in the field of medicine, the EBP paradigm has been adopted in many of the healthcare domains and has spread to other domains such as education, management, and computer science. In the spirit of Frederick Taylor's The Principles of Scientific Management (1914), the collection of essays presented in this work endeavors to advance the use of empirically based, juried evidence for decision making in a business context. The specific context selected is the domain of software engineering (SE) - a key component in a technology-laden world. Within the SE domain, the essays address three objectives in the advancement of the Evidence-Based Software Engineering (EBSE) paradigm by:
1. Mapping the research completed to date regarding the implementation of EBP in the SE domain, to identify gaps and opportunities in the research.
2. Identifying the barriers deemed most important by the members of the SE research community who conduct systematic literature reviews in support of EBSE.
3. Developing the use of algorithmic techniques as a discriminant function in the selection process of the systematic review methodologies.
Together, the collection of essays represents a line of inquiry within a broader research stream concerning the implementation of EBP - a modern version of Taylor's work - within the SE domain. The collection provides valuable insights concerning the status of EBSE and its literature, the problems associated with secondary research under the paradigm, and the basis for a discriminant function designed to assist in resolving a key issue for those seeking guidance in the academic literature.

Item Benefits of integrated screening and vaccination for infection control (PLOS, 2022) Rabil, Marie Jeanne; Tunc, Sait; Bish, Douglas R.; Bish, Ebru K.; Virginia Polytechnic Institute & State University; University of Alabama Tuscaloosa
Importance: Screening and vaccination are essential in the fight against infectious diseases, but need to be integrated and customized based on community and disease characteristics.
Objective: To develop effective screening and vaccination strategies, customized for a college campus, to reduce COVID-19 infections, hospitalizations, deaths, and peak hospitalizations.
Design, setting, and participants: We construct a compartmental model of disease spread under vaccination and routine screening, and study the efficacy of four mitigation strategies (routine screening only, vaccination only, vaccination with partial or full routine screening) and a no-intervention strategy. The study setting is a hypothetical college campus of 5,000 students and 455 faculty members during the Fall 2021 academic semester, when the Delta variant was the predominant strain. For sensitivity analysis, we vary the screening frequency, daily vaccination rate, initial vaccine coverage, and screening and vaccination compliance, and consider scenarios that represent low/medium/high transmission and test efficacy. Model parameters come from publicly available or published sources.
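As a toy illustration of how a compartmental model couples screening and vaccination, the sketch below uses a three-compartment discrete-time model with hypothetical parameters; it is far simpler than the paper's model and is meant only to show the mechanics.

```python
def simulate(pop=5455, beta=0.3, gamma=0.1, vax_rate=0.01, screen_freq=2,
             test_sens=0.8, i0=10, days=112):
    """Toy susceptible/infectious/removed model with daily vaccination of
    susceptibles and periodic screening that detects and isolates
    infectious individuals.  Illustrative parameters only."""
    s, i = pop - i0, float(i0)
    total_infections = float(i0)
    for day in range(days):
        new_inf = beta * s * i / pop        # new infections via mass action
        vaxed = vax_rate * s                # susceptibles vaccinated today
        detected = test_sens * i if (screen_freq and day % screen_freq == 0) else 0.0
        s = s - new_inf - vaxed
        i = i + new_inf - gamma * i - detected
        total_infections += new_inf
    return total_infections

# Screening every other day vs. no screening, all else equal:
with_screen = simulate(screen_freq=2)
no_screen = simulate(screen_freq=0)
```

Even this crude sketch reproduces the qualitative point of the paper: routine screening sharply reduces cumulative infections relative to vaccination alone.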
Results: With low initial vaccine coverage (30% in our study), even aggressive vaccination and screening result in a high number of infections: 1,020 to 2,040 (1,530 to 2,480) with routine daily (every-other-day) screening of the unvaccinated; 280 to 900 with daily screening extended to the newly vaccinated, in base- and worst-case scenarios, which respectively consider reproduction numbers of 4.75 and 6.75 for the Delta variant.
Conclusion: Integrated vaccination and routine screening can allow for a safe opening of a college when both the vaccine effectiveness and the initial vaccine coverage are sufficiently high. The interventions need to be customized considering the initial vaccine coverage, estimated compliance, screening and vaccination capacity, disease transmission and adverse outcome rates, and the number of infections/peak hospitalizations the college is willing to tolerate.

Item Construction of estimation-equivalent second-order split-split-plot designs (University of Alabama Libraries, 2011) Yuan, Fang; Perry, Marcus B.; University of Alabama Tuscaloosa
In many experimental settings, some experimental factors are very hard or very expensive to change, some are hard to change, and some are easy to change, which usually leads to a split-split-plot design. In such a case, there are randomization restrictions in the experiment, and if the data are analyzed as if they came from a completely randomized design, the results could be misleading. The analysis of split-split-plot designs is more complicated relative to the completely randomized design, as generalized least squares (GLS) is recommended for estimating the factor effects and restricted maximum likelihood (REML) is recommended for estimating the variance components. As an alternative, one can consider estimation-equivalent designs, wherein ordinary least squares (OLS) and GLS estimates of the factor effects are equivalent.
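A tiny numerical sketch of this equivalence (illustrative numbers, not a design from the dissertation): with a compound-symmetry error covariance V = I + 0.5J within each of two whole plots, a factor column that is balanced within whole plots yields identical OLS and GLS estimates, while an unbalanced column does not.

```python
def vinv_apply(x):
    # V = I + 0.5*J per whole plot (J = 4x4 all-ones); the closed-form
    # inverse is I - J/6, applied blockwise to a length-8 column.
    out = []
    for b in range(2):
        blk = x[4 * b:4 * b + 4]
        s = sum(blk)
        out.extend(v - s / 6 for v in blk)
    return out

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def solve2(a11, a12, a22, b1, b2):
    # Solve the symmetric 2x2 normal equations [[a11,a12],[a12,a22]] beta = (b1,b2).
    det = a11 * a22 - a12 * a12
    return ((a22 * b1 - a12 * b2) / det, (a11 * b2 - a12 * b1) / det)

def fit(x2, y, gls=False):
    # Two-column model: intercept plus one factor column x2.
    x1 = [1.0] * 8
    w = vinv_apply if gls else (lambda v: v)
    wy, wx2 = w(y), w(x2)
    return solve2(dot(x1, w(x1)), dot(x1, wx2), dot(x2, wx2),
                  dot(x1, wy), dot(x2, wy))

y = [0.5, -1.2, 0.3, 2.1, -0.7, 1.4, 0.2, -0.9]
balanced = [-1, -1, 1, 1, -1, -1, 1, 1]      # sums to zero within each whole plot
unbalanced = [-1, 1, 1, 1, -1, -1, -1, 1]    # does not
```

Here `fit(balanced, y)` and `fit(balanced, y, gls=True)` agree to floating-point precision, because V maps the column space of the design onto itself, while the unbalanced column produces different OLS and GLS slopes.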
These designs provide practical benefits from the perspective of design selection and estimation and are consistent with traditional response surface methods. Although much work has been done with respect to estimation-equivalent second-order split-plot designs, less emphasis has been placed on split-split-plot (and higher strata) designs of this type. My research derives the general conditions for achieving OLS-GLS equivalence and uses these conditions to construct balanced and unbalanced estimation-equivalent second-order split-split-plot designs from the central composite design (CCD).

Item Contributions to joint monitoring of location and scale parameters: some theory and applications (University of Alabama Libraries, 2012) McCracken, Amanda Kaye; Chakraborti, Subhabrata; University of Alabama Tuscaloosa
Since their invention in the 1920s, control charts have been popular tools for monitoring processes in fields as varied as manufacturing and healthcare. Most of these charts are designed to monitor a single process parameter, but recently, a number of charts and schemes for jointly monitoring the location and scale of processes which follow two-parameter distributions have been developed. These joint monitoring charts are particularly relevant for processes in which special causes may result in a simultaneous shift in the location parameter and the scale parameter. Among the available schemes for jointly monitoring location and scale parameters, the vast majority are designed for normally distributed processes for which the in-control mean and variance are known rather than estimated from data. When the process data are non-normally distributed or the process parameters are unknown, alternative control charts are needed.
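A max-type construction is one common way to combine location and scale monitoring in a single plotted statistic. The sketch below is a simplified illustration, not a chart from this dissertation: it standardizes the sample mean exactly and the sample variance via the Wilson-Hilferty chi-square approximation (rather than an exact inverse-CDF transform), then takes the larger absolute value.

```python
import math

def max_chart_stat(sample, mu0, sigma0):
    """Max-chart-style joint statistic (illustrative): transform the sample
    mean and variance to approximate N(0,1) scores and return the larger
    absolute value; large values suggest a shift in location or scale."""
    n = len(sample)
    xbar = sum(sample) / n
    s2 = sum((x - xbar) ** 2 for x in sample) / (n - 1)
    z_mean = math.sqrt(n) * (xbar - mu0) / sigma0
    k = n - 1
    x = k * s2 / sigma0 ** 2          # ~ chi-square with k df when in control
    # Wilson-Hilferty normal approximation to the chi-square distribution.
    z_var = ((x / k) ** (1 / 3) - (1 - 2 / (9 * k))) / math.sqrt(2 / (9 * k))
    return max(abs(z_mean), abs(z_var))

in_control = [10.1, 9.8, 10.0, 10.3, 9.9]   # near mu0 = 10, sigma0 = 0.5
shifted = [11.5, 11.8, 12.0, 11.3, 11.9]    # mean shifted upward
```

With a conventional-looking limit of about 3, the shifted sample signals loudly while the in-control sample does not.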
This dissertation presents and compares several control schemes for jointly monitoring data from Laplace and shifted exponential distributions with known parameters, as well as a pair of charts for monitoring data from normal distributions with unknown mean and variance. The normal theory charts are adaptations of two existing procedures for the known parameter case, Razmy's (2005) Distance chart and Chen and Cheng's (1998) Max chart, while the Laplace and shifted exponential charts are designed using an appropriate statistic for each parameter, such as the maximum likelihood estimators.

Item Contributions to multivariate control charting: studies of the Z chart and four nonparametric charts (University of Alabama Libraries, 2010) Boone, Jeffrey Michael; Chakraborti, Subhabrata; University of Alabama Tuscaloosa
Autocorrelated data are common in today's process control applications. Many of these applications involve two or more related variables, so multivariate statistical process control (SPC) methods should be used in process monitoring, since the relationship among the variables should be accounted for. Dealing with multivariate autocorrelated data poses many challenges. Even though no one chart is best for multivariate data, the Z chart proposed by Kalgonda and Kulkarni (2004) is fairly easy to implement and is particularly useful for its diagnostic ability, that is, its ability to pinpoint the variable(s) that are out of control when the chart signals. In this dissertation, the performance of the Z chart is compared to the chi-square chart and the multivariate EWMA (MEWMA) chart in a number of simulation studies. Simulations are also performed to study the effects of parameter estimation and non-normality (using the multivariate t and multivariate gamma distributions) on the performance of the Z chart.
In addition to the problem of autocorrelation in multivariate quality control, in many quality control applications the distribution assumption of the data is not met, or there is not enough evidence showing that it is met. In many situations, a control chart that does not require a strict distribution assumption, called a nonparametric or distribution-free chart, may be desirable. In this work, four new multivariate nonparametric Shewhart control charts are proposed. They are relatively simple to use and are based on the multivariate forms of the sign and Wilcoxon signed-rank statistics and on the maximum of multiple univariate sign and Wilcoxon signed-rank statistics. The performance of these charts is studied, and illustrations and applications are demonstrated.

Item Contributions to outlier detection methods: some theory and applications (University of Alabama Libraries, 2011) Dovoedo, Yinaze Herve; Chakraborti, Subhabrata; University of Alabama Tuscaloosa
Tukey's traditional boxplot (Tukey, 1977) is a widely used Exploratory Data Analysis (EDA) tool, often used for outlier detection with univariate data. In this dissertation, a modification of Tukey's boxplot is proposed in which the probability of at least one false alarm is controlled, as in Sim et al. (2005). The exact expression for that probability is derived and is used to find the fence constants for observations from any specified location-scale distribution. The proposed procedure is compared with that of Sim et al. (2005) in a simulation study. Outlier detection and control charting are closely related. Using the preceding procedure, one- and two-sided boxplot-based Phase I control charts for individual observations are proposed for data from an exponential distribution, while controlling the overall false alarm rate. The proposed charts are compared with the charts of Jones and Champ (2002) in a simulation study.
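For reference, the classical Tukey fences that these modified procedures build on can be sketched in a few lines (illustrative data; the quantile interpolation used here is one of several common conventions):

```python
def tukey_fences(data, k=1.5):
    """Classical Tukey boxplot rule: flag observations outside
    [Q1 - k*IQR, Q3 + k*IQR] as potential outliers."""
    xs = sorted(data)
    n = len(xs)
    def quartile(p):
        # Linear-interpolation sample quantile (one common convention).
        h = p * (n - 1)
        lo = int(h)
        return xs[lo] + (h - lo) * (xs[min(lo + 1, n - 1)] - xs[lo])
    q1, q3 = quartile(0.25), quartile(0.75)
    iqr = q3 - q1
    lo_f, hi_f = q1 - k * iqr, q3 + k * iqr
    return [x for x in data if x < lo_f or x > hi_f]

sample = [2.1, 2.4, 2.2, 2.3, 2.5, 2.2, 9.0]   # 9.0 is a gross outlier
```

The dissertation's contribution is, in effect, to replace the fixed constant k = 1.5 with fence constants chosen so the probability of at least one false alarm is controlled for a specified location-scale distribution.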
Sometimes the practitioner is unable or unwilling to make an assumption about the form of the underlying distribution, but is confident that the distribution is skewed. In that case, it is well documented that the application of Tukey's boxplot for outlier detection results in an increased number of false alarms. To this end, in this dissertation, a modification of the so-called adjusted boxplot for skewed distributions of Hubert and Vandervieren (2008) is proposed. The proposed procedure is compared to the adjusted boxplot and Tukey's procedure in a simulation study. In practice, the data are often multivariate. The concept of a (statistical) depth (or, equivalently, outlyingness) function provides a natural, nonparametric, "center-outward" ordering of a multivariate data point with respect to the data cloud. The deeper a point, the less outlying it is. It is then natural to use some outlyingness functions as outlier identifiers. A simulation study is performed to compare the outlier detection capabilities of selected outlyingness functions available in the literature for multivariate skewed data. Recommendations are provided.

Item Coordination of pricing, advertising, and production decisions for multiple products (University of Alabama Libraries, 2013) Bajwa, Ahmad Naeem; Sox, Charles R.; University of Alabama Tuscaloosa
This research aims to develop and propose mathematical models that can be used to facilitate cross-functional coordination between operations and marketing. We consider a dynamic problem of joint pricing, advertising, and production decisions for a profit-maximizing firm that produces multiple products. We assume the firm operates in a monopolistic environment, where demand for its products is a function of price and advertising expenditure. We model the problem as a mixed integer nonlinear program, incorporating capacity constraints, setup costs, and demand seasonality. We first model and solve the pricing problem without advertising.
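For intuition on the pricing subproblem, consider the simplest single-product, single-period special case, a textbook example rather than the dissertation's MINLP: with linear demand d(p) = a - b*p and unit cost c, profit (p - c)(a - b*p) is maximized at p* = (a + b*c)/(2*b).

```python
def optimal_price(a, b, c):
    """Profit-maximizing price for linear demand d(p) = a - b*p and unit
    cost c: maximize (p - c) * (a - b*p); the first-order condition
    a - 2*b*p + b*c = 0 gives p* = (a + b*c) / (2*b)."""
    return (a + b * c) / (2 * b)

def profit(p, a, b, c):
    return (p - c) * (a - b * p)

# Illustrative numbers: a = 100, b = 2, c = 10.
p_star = optimal_price(a=100, b=2, c=10)   # -> 30.0
```

The capacitated, seasonal, multi-product problem in the dissertation couples many such first-order conditions through shared capacity, which is what makes the full model a mixed integer nonlinear program.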
Later, we extend the model to include advertising as a decision variable. The demand for each product is assumed to be continuous, differentiable, strictly decreasing in price, and concave in advertising. We present a solution approach that can be used for constant pricing as well as dynamic pricing strategies. Furthermore, the solution approach is general and applicable to linear as well as nonlinear demand functions. Using real-world data from a manufacturer, we create problem instances for different demand scenarios at different capacities and solve for optimal prices under each strategy. We present analytical results that provide managerial insights on how the optimal prices change for different production plans and at different capacities. We compare the firm's profitability under the two pricing strategies, and show that dynamic pricing is valuable at low capacities and when at least one of the products has peak demand at the beginning of the planning horizon. We show that the optimal allocation of the advertising budget across products does not change with budget changes, and changes only minimally with changes in demand seasonality. It is optimal to increase advertising in periods of higher demand and decrease it in periods of lower demand. Hence, firms can use rules of thumb and need not frequently review the allocation. Numerical results show that the proposed algorithms have good convergence properties. Finally, as is clear from a review of the academic literature, there are no decision support systems that truly integrate production/inventory and pricing decisions - especially for multi-product problems.
We believe this work makes valuable contributions in developing solution methodologies that can be incorporated in such decision support systems.

Item Coordination of pricing, sourcing and product design decisions in a supply chain (University of Alabama Libraries, 2013) Liu, Bing; Sox, Charles R.; University of Alabama Tuscaloosa
Supply chain management is more than a movement advocated by a few pioneers today; it is a strategic as well as operational revolution for business operations. A better understanding of the underlying principles of supply chain scenarios supports the now widely accepted view that coordination along supply chain links can achieve a win-win situation. Meanwhile, the penetration of the Internet and information technology into every corner of our lives enables large-scale interaction and cooperation between multiple business units and/or end consumers. Supply chain management has gone beyond the early stage when basic cases were discussed for pedagogical purposes; more realistic application scenarios are yet to be identified and investigated. Supply chain performance can be affected by a large set of factors, and a variety of supply chain scenarios are in operation. This dissertation identifies a number of typical application scenarios and focuses on decision issues integrating pricing, sourcing, product attributes, and evolving partnerships between firms. It consists of three manuscripts to be submitted for journal publication. The first paper studies the coordination of pricing and sourcing in two scenarios where multiple suppliers and multiple retail channels are involved. This study investigates the cross-price effects between retail channels on overall profitability. In addition to the analytical analysis, a number of numerical experiments are conducted to investigate realistic issues decision makers may encounter. The second paper studies a supply chain in which a manufacturer sells a configurable product through a retailer.
We take the configurable product as a parameterized product and examine the impact of such a feature decision on the equilibrium between the manufacturer and the retailer. The analysis considers three different cost functions: (1) linear, (2) quadratic, and (3) exponential; it examines how each affects the optimal solutions. The third paper presents a novel framework in which the supply chain structure evolves from one stage to the next in terms of changing memberships and business partnerships between members. The research work to date on supply chain coordination assumes a static supply chain structure, which remains the same throughout the sequence of events. In reality, however, supply chain memberships and the partnerships between member firms are not all established at one time, and the supply chain structure is subject to evolution: firms may join or leave a supply chain network, and the partnerships between two firms may change as well. Among a large variety of possible evolving scenarios, this research focuses on a case in which a second manufacturer joins a supply chain initially established with one manufacturer and one retailer. The supply chain evolves from the stage with one partnership to the next stage with two partnerships, the two partnerships being established one after another on two competing and differentiated products. Compatibility between the demand models in the two stages is established. Based on the analytical non-closed-form solution, a number of numerical experiments are developed to demonstrate the impact of the introduction of a competing product on the optimal solution for product one. This dissertation conducts a series of exploratory research works on supply chain coordination. It considers coordination not only between multiple players but also across multiple strategies, and it considers products specified not only by price but also by configurable attributes.
It puts forward a novel framework, the evolving supply chain, in contrast to the static supply chain that is the typical research subject in the literature. This dissertation provides a set of insightful results, which help us better understand the underlying coordination mechanisms and make wise decisions as needed.

Item The development of diagnostic tools for mixture modeling and model-based clustering (University of Alabama Libraries, 2016) Zhu, Xuwen; Melnykov, Volodymyr; University of Alabama Tuscaloosa
Cluster analysis performs unsupervised partitioning of heterogeneous data and has applications in almost all fields of study. Model-based clustering, based on finite mixture models, is one of the most popular clustering methods these days due to its flexibility and interpretability. However, the development of diagnostic and visualization tools for clustering procedures is limited. This dissertation is devoted to assessing different properties of the clustering procedure. This report has four chapters; a summary of each is given below. In the first chapter we provide practitioners with an approach to assess the certainty of a classification made in model-based clustering. The second chapter introduces a novel finite mixture model called the Manly mixture model, which is capable of modeling skewness in data and performs diagnostics on the normality of variables. In the third chapter we develop an extension of the traditional K-means procedure that is capable of modeling skewness in data. The fourth chapter contributes to the ManlyMix R package, the software developed to accompany our paper in Chapter 2.

Item The Development of Statistical Monitoring Scheme and Simulation Model for the Autocorrelated Process (University of Alabama Libraries, 2020) Wang, Zhi; Perry, Marcus; University of Alabama Tuscaloosa
Modern developments in data acquisition and storage technologies have allowed for rapid data collection.
One representative example is data collection via high-sample-rate sensors, which record hundreds or more samples per second. The proximity between observations can induce high autocorrelation into data sequences. Consequently, developing statistical tools for dealing with autocorrelated processes is of paramount value in modern data analysis. For this reason, this dissertation places primacy upon developing appropriate monitoring schemes and simulation models for autocorrelated processes. In addition, the complexity of modern processes precludes the use of some conventional statistical approaches that carry rigid distributional assumptions. The wide practicality of modern processes motivates the work in this dissertation and points to great potential for future investigation. Statistical process control (SPC) has wide applications in quality engineering, manufacturing industries, social science, disease surveillance, and many other areas. In this dissertation, a distribution-free scheme for jointly and independently monitoring location and scale using individual observations is developed, based on the Bernoulli cumulative sum (CUSUM) control chart and the Bahadur model. The approach takes autocorrelation into consideration and circumvents the model-misspecification problem. The necessity of the method is motivated, and simulation studies and real-world applications are used to evaluate the reliability and performance of the proposed scheme. Knowing when a process deviated from the desired in-control status would simplify post-signal diagnostics for control charts. In this dissertation, we develop maximum likelihood estimators (MLEs) of the change point in time and introduce the built-in change point estimators for CUSUM and binomial exponentially weighted moving average (EWMA) charts.
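The built-in change point estimator for a CUSUM chart has a simple form: the last time the CUSUM statistic sat at zero before the signal. A minimal sketch (standardized observations and illustrative reference value k and decision limit h, not the dissertation's tuned design):

```python
def cusum_with_changepoint(xs, k=0.5, h=4.0):
    """Upper one-sided CUSUM: C_t = max(0, C_{t-1} + x_t - k); signal when
    C_t > h.  The built-in change point estimate is the last time t at
    which C_t was zero before the signal."""
    c, last_zero = 0.0, 0
    for t, x in enumerate(xs, start=1):
        c = max(0.0, c + x - k)
        if c == 0.0:
            last_zero = t
        if c > h:
            return t, last_zero   # (signal time, estimated change point)
    return None, last_zero        # no signal within the data

# Standardized data: in control (mean 0) for 5 periods, then a mean shift.
data = [0.1, -0.2, 0.05, -0.1, 0.0, 2.0, 1.8, 2.2, 1.9]
signal, tau_hat = cusum_with_changepoint(data)
```

Here the chart signals a few periods after the shift, and the built-in estimator points back to the last in-control period, which is exactly the diagnostic convenience the text describes.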
Relative mean index plots are provided, and general conclusions are summarized, to assist control chart users in selecting a change point estimator and control chart design combination that guarantees robust change point estimation performance across a range of potential change magnitudes. Another aspect we studied is the simulation of autocorrelated processes. In this dissertation, we develop a simulation approach that permits users to simulate autocorrelated processes from both discrete and continuous distributions with a fully customizable order and structure of autocorrelation. Simulation studies and real-world applications are used to evaluate and illustrate the usefulness of the proposed simulation model.

Item Discernable Periods in the Historical Development of Statistical Inference (1967) Gober, Richard Wayne; University of Alabama Tuscaloosa
The purpose of this study is to trace the historical development of that part of modern statistical procedures known as statistical inference. Although the application of statistical methods is concerned more than ever with the study of great masses of data, percentages, and columns of figures, statistics has moved far beyond the descriptive stage. Using concepts from mathematics, logic, economics, and psychology, modern statistics has developed into a designed "way of thinking" about conclusions or decisions to help a person choose a reasonable course of action under uncertainty. The general theory and methodology is called statistical inference.

Item Distribution system design for omnichannel retailing (University of Alabama Libraries, 2019) Guo, Jia; Keskin, Burcu B.; University of Alabama Tuscaloosa
Omnichannel retailing - serving customers via a combination of physical stores and web-based stores - offers new opportunities and forces traditional retailers to rethink their supply chain design, operational efficiency, revenue/cost streams, and operations/marketing interface.
While omnichannel supply chain management has received some attention recently, the role of cross-channel fulfillment, the layout of the omnichannel retail supply chain, and revenue management considering customer channel-choice behavior have not been widely studied. This dissertation investigates these three streams in omnichannel supply chain design. In the cross-channel fulfillment stream, we study the optimal supply chain design for a dual-channel retailer that combines the operations of both channels in an omnichannel environment, considering demand segmentation, cost structure, and, more importantly, the execution ability of the firm. We formulate this problem as a two-stage stochastic programming model and use first-order optimality conditions to study the optimal inventory replenishment decisions and omnichannel strategy decisions under perfect and imperfect demand information. In the second chapter, we extend the dual-channel setting from a single store to N retail stores and study the transshipment problem, building on the two-store case, by reformulating the problem as a large-scale mixed-integer linear programming model. The third chapter addresses the revenue management stream by focusing on the interface between the retailer's operations and customer demand. Specifically, this chapter explores the right role for a physical store in an omnichannel environment for an online-first retailer.
The main result relates to the trade-off between the increased profits from the newly acquired demand (from the new channel) and the increased fulfillment and operations costs from cannibalized demand.

Item Effective screening strategies for safe opening of universities under Omicron and Delta variants of COVID-19 (Nature Portfolio, 2022) Rabil, Marie Jeanne; Tunc, Sait; Bish, Douglas R.; Bish, Ebru K.; Virginia Polytechnic Institute & State University; University of Alabama Tuscaloosa
As new COVID-19 variants emerge, and disease and population characteristics change, screening strategies may also need to change. We develop a decision-making model that can assist a college in determining an optimal screening strategy based on its characteristics and resources, considering COVID-19 infections/hospitalizations/deaths, peak daily hospitalizations, and the tests required. We also use this tool to generate screening guidelines for the safe opening of college campuses. Our compartmental model simulates disease spread on a hypothetical college campus under co-circulating variants with different disease dynamics, considering: (i) the heterogeneity in disease transmission and outcomes for faculty/staff and students based on vaccination status and level of natural immunity; and (ii) variant- and dose-dependent vaccine efficacy. Using the Spring 2022 academic semester as a case study, we study routine screening strategies and find that screening the faculty/staff less frequently than the students, and/or the boosted and vaccinated less frequently than the unvaccinated, may avert a higher number of infections per test, compared to universal screening of the entire population at a common frequency.
We also discuss key policy issues, including the need to revisit the mitigation objective over time, effective strategies that are informed by booster coverage, and whether and when screening alone can compensate for low booster coverage.

Item Essays on Mixed-Fleet Green Vehicle Routing (University of Alabama Libraries, 2020) Koyuncu, Isil; Yavuz, Mesut; University of Alabama Tuscaloosa
This work addresses a family of green vehicle routing problems (GVRPs). Three key operational characteristics distinguish alternative-fuel vehicles (AFVs) from gasoline or diesel vehicles (GDVs): (i) limited driving range before refueling is needed, (ii) scarce refueling infrastructure, and (iii) lengthy refueling times. The operational challenges in daily routing decisions faced by fleet managers, and several key modeling aspects such as mixed fleets, refueling at customer and non-customer locations, and refueling policies, are incorporated into the GVRP models. The first study compares two competing GVRP formulations, namely node- and arc-duplicating. Both formulations are strengthened via (i) two label setting algorithms to tighten the bounds, and (ii) an improved lower bound on the number of routes. Through computational experiments based on two testbeds from the literature, the study concludes that the less common arc-duplicating formulation outperforms the more common node-duplicating formulation. The second study introduces an efficient solution framework that exploits the route optimization outcome for the GDVs. We investigate the benefits of utilizing GDV optimal routes by quantifying the differences between AFV and GDV optimal routes and the solution times. Based on the results, three route optimization frameworks are proposed and implemented in a column generation algorithm, and a solution methodology that potentially shortens the expected solution time is proposed.
Finally, the third study introduces a novel profit-maximizing fleet mix and sizing GVRP with customer selection. In addition to addressing the operational challenges presented in the previous chapters, this study considers environmentally conscious customers who prefer receiving service with AFVs to reduce their supply chain carbon footprint and may be willing to pay a green premium for it.

Item Exact Tolerance Interval with Specified Tail Probabilities and a Control Chart for the Sample Variance (Wiley, 2022-12-02) Mosab Alqurashi; Subha Chakraborti; Chase Holcombe
A β-content tolerance interval (TI) is a statistical interval that contains at least some fraction (proportion) of the population with a given confidence level. When we are interested in the precision of a quality characteristic, a TI for the sample variance is useful. In this paper, we consider an exact two-sided β-content TI for the sample variance from a normal distribution with a specified ratio of the tail probabilities. The proposed tolerance interval allows the practitioner more control over how the probabilities in the tails are distributed, which may be useful in certain applications. A comparison with an existing two-sided β-content TI shows that the proposed TI is better on the basis of expected coverage and standard deviation of the coverage. In addition, the proposed TI is shown to require fewer subgroups to achieve a specific accuracy level. Moreover, a Phase II control chart with guaranteed performance is obtained from the proposed TI. Finally, real and simulated data sets are used for illustration.

Item Feature selection for evolutionary Commercial-Off-The-Shelf software: studies focusing on time-to-market, innovation and hedonic-utilitarian trade-offs (University of Alabama Libraries, 2013) Kakar, Adarsh Kumar Satindarlal; Hale, Joanne E.; University of Alabama Tuscaloosa
Feature selection is one of the most important decisions made by product managers.
This three-article study investigates the concepts, tools and techniques for making trade-off decisions about introducing new features in evolving Commercial-Off-The-Shelf (COTS) software products. The first article investigates the efficacy of various feature selection techniques when the trade-off is between comprehensiveness and time-to-market. The second article investigates the impact of the current level of product performance when the trade-off is between providing different types of innovative features to users. The third article investigates the impact on the ability of the COTS product to attract new users and retain existing users when the trade-off is between providing utilitarian and hedonic value through new product features. To meet these research goals, an extensive multidisciplinary study of the Information Systems (IS) and Product Development literatures was conducted, followed by experimental research. The experiments were conducted among young adults aged 19 to 24 who were users of Gmail, and produced several key findings. In the first study, the Kano survey method was found to be effective in identifying those features which added value to the product and those that did not. This finding will help product managers use appropriate techniques for identifying the critical product features to be built into the COTS product, thereby reducing time-to-market without sacrificing product quality. In the second study, current COTS product performance was found to significantly impact the type of innovation to be introduced into the COTS product. Basic or Core product innovations were found to have value for users when performance is low but not when it is high. On the other hand, Expected or product Performance innovations and Augmented or user Excitement innovations were found to have value when performance is high but not when it is low.
In the third study, Hedonic value and Utilitarian value of product features were found to have distinct impacts on users. While Hedonic value impacted Word-of-Mouth, a measure of the product's capacity to attract new customers, Utilitarian value impacted User Loyalty, a measure of the product's capacity to retain existing customers.

Item GA-Boost: a genetic algorithm for robust boosting (University of Alabama Libraries, 2012) Oh, Dong-Yop; Gray, J. Brian; University of Alabama Tuscaloosa
Many simple and complex methods have been developed to solve the classification problem. Boosting is one of the best known techniques for improving the prediction accuracy of classification methods, but boosting is sometimes prone to overfitting, and the final model can be difficult to interpret. Some boosting methods, including AdaBoost, are very sensitive to outliers. Many researchers have contributed to resolving these problems, but they remain open issues. We introduce a new boosting algorithm, "GA-Boost", which directly optimizes weak learners and their associated weights using a genetic algorithm, along with three extended versions of GA-Boost. The genetic algorithm utilizes a new penalized fitness function with three parameters (a, b, and p), which limit the number of weak classifiers (via b) and control the effects of outliers (via a) to maximize an appropriately chosen p-th percentile of margins. We evaluate GA-Boost's performance with an experimental design and compare it to AdaBoost using several artificial and real-world data sets from the UC Irvine Machine Learning Repository. In the experiments, GA-Boost was more resistant to outliers and resulted in simpler predictive models than AdaBoost. GA-Boost can be applied to data sets with three different weak classifier options.
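One plausible reading of the penalized fitness function described in the GA-Boost abstract can be sketched as follows. The roles of a, b, and p come from the abstract; the exact functional form below is a hypothetical illustration, not the dissertation's definition:

```python
import numpy as np

def penalized_fitness(margins, n_weak, a=1.0, b=10, p=25):
    """Hypothetical GA-Boost-style fitness for one candidate ensemble.

    margins : signed classification margins of the training points
    n_weak  : number of weak classifiers in the candidate ensemble
    a       : caps the influence of extreme (outlier) margins
    b       : soft limit on the number of weak classifiers
    p       : percentile of the margin distribution to maximize
    """
    clipped = np.clip(margins, -a, a)       # a: damp outlier margins
    size_penalty = max(0, n_weak - b)       # b: discourage large ensembles
    return float(np.percentile(clipped, p)) - 0.1 * size_penalty

margins = np.array([0.6, 0.7, 0.5, -5.0, 0.8])  # one extreme outlier
robust = penalized_fitness(margins, n_weak=4, p=10)             # outlier clipped at -a
sensitive = penalized_fitness(margins, n_weak=4, a=10.0, p=10)  # outlier dominates
```

With a small `a`, the outlying margin of -5.0 is clipped before the percentile is taken, so `robust > sensitive`: a single bad point cannot dominate the fitness, which mirrors the outlier resistance reported above.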
The three extended versions of GA-Boost performed very well on two simulated data sets and three real-world data sets.

Item Integrated supply chain models and analysis (University of Alabama Libraries, 2016) Zhi, Jianing; Keskin, Burcu Baris; University of Alabama Tuscaloosa
This dissertation reports on three integrated supply chain problems that incorporate several key components of modern supply chains: location, transportation, inventory, and customer selection. The first part of the dissertation investigates a multi-product, three-stage distribution system with transshipment and direct shipping from supply centers to customers. The objective is to determine the locations of a fixed number of capacitated warehouses to minimize the total transportation and fixed facility costs in the supply chain network. The second part of the dissertation focuses on the integrated location-inventory problem in a multi-retailer newsvendor setting with both decentralized and centralized decision making. The third part of the dissertation explores the coordination between operations management and marketing through an integration of marketing strategies and inventory decisions to maximize the total expected profit of the company. The contribution of this dissertation is four-fold. First, we define two new problems in integrated supply chain decision making: one combines inventory and location decisions under two supply chain network designs, and the other studies the interface of operations management and marketing on top of a selective newsvendor problem with quantity-dependent lead time. For both problems, we offer mathematical models and effective solution approaches. Second, we develop two meta-heuristic solution approaches for a multi-product production/distribution system design (PDSD) problem, which has been studied in the literature and solved with scatter search and tabu search.
We propose to solve the problem with two meta-heuristic procedures, simulated annealing (SA) and the greedy randomized adaptive search procedure (GRASP), which demonstrate better solution quality and time performance than scatter search. Third, we establish a practical connection between operations and marketing in the selective newsvendor problem. This effort demonstrates that a joint decision-making process is more profitable and opens up more cooperation opportunities between the two disciplines. Lastly, the proposed mathematical models, solution approaches, and managerial insights for both new and existing problems will potentially inform research on problem variants and the development of new techniques beyond those considered in this dissertation.
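The simulated-annealing approach mentioned above follows a standard metaheuristic pattern, which can be sketched on a toy facility-selection (p-median-style) objective. The data and move rule here are hypothetical illustrations of the general SA skeleton, not the PDSD implementation developed in the dissertation:

```python
import math
import random

def total_cost(open_sites, dist):
    """Toy p-median objective: each customer is served by its nearest open site."""
    return sum(min(row[j] for j in open_sites) for row in dist)

def anneal(dist, n_sites, k, t0=10.0, cooling=0.995, iters=5000, seed=0):
    """Generic simulated-annealing skeleton: swap one open site per move,
    accept worse moves with probability exp(-delta / T), cool T geometrically."""
    rng = random.Random(seed)
    current = rng.sample(range(n_sites), k)     # random initial selection
    best = list(current)
    cur_cost = best_cost = total_cost(current, dist)
    t = t0
    for _ in range(iters):
        closed = [j for j in range(n_sites) if j not in current]
        cand = list(current)
        cand[rng.randrange(k)] = rng.choice(closed)   # swap move
        c = total_cost(cand, dist)
        if c < cur_cost or rng.random() < math.exp(-(c - cur_cost) / t):
            current, cur_cost = cand, c
            if c < best_cost:
                best, best_cost = list(cand), c
        t *= cooling                                  # cooling schedule
    return best, best_cost

# Hypothetical instance: 30 customers, 8 candidate sites, open 3 of them.
rng = random.Random(1)
dist = [[rng.uniform(1.0, 10.0) for _ in range(8)] for _ in range(30)]
best, cost = anneal(dist, n_sites=8, k=3)
```

The appeal of this pattern for location problems like the PDSD is that only the objective function and the move rule are problem-specific; the acceptance rule and cooling schedule are generic.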