Department of Information Systems, Statistics & Management Science
Browsing Department of Information Systems, Statistics & Management Science by Issue Date
Now showing 1 - 20 of 60

Item Discernable Periods in the Historical Development of Statistical Inference (1967)
Gober, Richard Wayne; University of Alabama Tuscaloosa
The purpose of this study is to trace the historical development of that part of modern statistical procedures known as statistical inference. Although the application of statistical methods is concerned more than ever with the study of great masses of data, percentages, and columns of figures, statistics has moved far beyond the descriptive stage. Using concepts from mathematics, logic, economics, and psychology, modern statistics has developed into a designed "way of thinking" about conclusions or decisions to help a person choose a reasonable course of action under uncertainty. The general theory and methodology is called statistical inference.

Item Workforce supply and facility location (University of Alabama Libraries, 2009)
Palmer, Nathan Curtis; Sox, Charles R.; Mittenthal, John; University of Alabama Tuscaloosa
It is important for every company to minimize its costs. This includes labor costs. We develop mathematical models that allow a company to minimize its labor costs by deciding from where to hire workers and the amount that will be paid to those workers within a similar region. These decisions are particularly important when the company has multiple facilities that compete amongst themselves for labor resources. In areas that are experiencing economic growth or in developing countries labor resources are limited and labor decisions are critical. With this motivation, this work investigates the labor and facility location decisions of a company that has decided to build many new facilities in close proximity to each other. One example is a large manufacturing firm that seeks simultaneously to locate a new assembly plant and supplier facilities. Concentrating all, or much, of the supply chain together will cause already limited labor resources to be depleted even further. Higher wages are paid and higher labor costs are incurred by the company, as a result. On the other hand, greater transportation costs are incurred as the distances between the plant and its suppliers increase. For each of the supply chain facilities, the location of the facility, the labor markets from which to hire workers, and the wages offered must be determined. While considering these decisions, another potential factor in choosing the location for each facility is the cost of the site. This dissertation introduces this real-world problem, formulates it mathematically, and provides managerial insights for companies faced with these decisions.
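
As a rough illustration of the trade-off the workforce-supply models above weigh (wages offered in each labor market versus the cost of drawing workers from farther away), the sketch below solves a tiny labor-sourcing linear program. All figures, the market and facility counts, and the purely linear cost structure are invented for the example and are not taken from the dissertation.

```python
# A minimal labor-sourcing LP in the spirit of the workforce/facility-location work
# above (illustrative only; all cost figures and the model structure are hypothetical).
import numpy as np
from scipy.optimize import linprog

wages = np.array([14.0, 12.5, 13.0])          # hourly wage offered in each labor market
supply = np.array([120, 80, 100])             # workers available in each market
demand = np.array([90, 110])                  # workers required at each facility
commute = np.array([[1.0, 2.5, 4.0],          # per-worker commuting/transport cost
                    [3.0, 1.5, 2.0]])         # rows: facilities, cols: markets

n_fac, n_mkt = commute.shape
# Decision variable x[f, m] = workers facility f hires from market m, flattened row-wise.
c = (wages[None, :] + commute).ravel()

# Equality constraints: each facility meets its labor requirement.
A_eq = np.zeros((n_fac, n_fac * n_mkt))
for f in range(n_fac):
    A_eq[f, f * n_mkt:(f + 1) * n_mkt] = 1.0

# Inequality constraints: total hiring from each market cannot exceed its supply.
A_ub = np.zeros((n_mkt, n_fac * n_mkt))
for m in range(n_mkt):
    A_ub[m, m::n_mkt] = 1.0

res = linprog(c, A_ub=A_ub, b_ub=supply, A_eq=A_eq, b_eq=demand, bounds=(0, None))
print(res.x.reshape(n_fac, n_mkt))            # optimal hiring plan
print(res.fun)                                # minimum total labor + commuting cost
```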

Item Specification theory, patterns, and models in information systems domains: an exploratory investigation (University of Alabama Libraries, 2010)
Woolridge, Richard William; Hale, David P.; University of Alabama Tuscaloosa
Application and project domain specifications are an important aspect of Information Systems (IS) development. Observations of over thirty IS projects suggest dimly perceived structural patterns in specifications that are unaccounted for in research and practice. This investigation utilizes a theory building with case studies methodology to elucidate some of these patterns. As prerequisites to pattern identification, this investigation identified a theoretically and empirically grounded static model of specification that establishes specification context and structure, and a theoretically and empirically grounded dynamic model of specification that establishes the principles of specification emergence and evolution. Using these models as a foundation, this investigation synthesized a specification pattern model from four research disciplines and confirmed the specification pattern model for physical objects in case data. An additional specification pattern model for physical actions was also identified in the case data. The confirmation process found that the physical object and action models could not be extended to abstract informational objects or actions. The findings of this investigation answer a call for research of the application domain, advance understanding of IS requirements inadequacy and volatility, advance ontological research to include a mechanism to integrate static and dynamic dimensions, and provide avenues of study to improve understanding of the structure and dynamics governing stakeholder collaboration. In addition, this investigation suggests criteria for judging, as well as theories, models, and patterns for, a class of IS that is persistently adaptive.

Item Contributions to multivariate control charting: studies of the Z chart and four nonparametric charts (University of Alabama Libraries, 2010)
Boone, Jeffrey Michael; Chakraborti, Subhabrata; University of Alabama Tuscaloosa
Autocorrelated data are common in today's process control applications. Many of these applications involve two or more related variables, so multivariate statistical process control (SPC) methods should be used in process monitoring, since the relationship among the variables should be accounted for. Dealing with multivariate autocorrelated data poses many challenges. Even though no one chart is best for multivariate data, the Z chart proposed by Kalgonda and Kulkarni (2004) is fairly easy to implement and is particularly useful for its diagnostic ability, that is, its ability to pinpoint which variable(s) are out of control when the chart signals. In this dissertation, the performance of the Z chart is compared to the chi-square chart and the multivariate EWMA (MEWMA) chart in a number of simulation studies. Simulations are also performed to study the effects of parameter estimation and non-normality (using the multivariate t and multivariate gamma distributions) on the performance of the Z chart. In addition to the problem of autocorrelation in multivariate quality control, in many quality control applications the distribution assumption of the data is not met, or there is not enough evidence showing that the assumption is met. In many situations, a control chart that does not require a strict distribution assumption, called a nonparametric or distribution-free chart, may be desirable. In this dissertation, four new multivariate nonparametric Shewhart control charts are proposed. They are relatively simple to use and are based on the multivariate forms of the sign and Wilcoxon signed-rank statistics and on the maximum of multiple univariate sign and Wilcoxon signed-rank statistics. The performance of these charts is also studied. Illustrations and applications are also demonstrated.
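
The chi-square chart used above as a benchmark for the Z chart has a compact form when the in-control mean vector and covariance matrix are known. The sketch below computes that statistic and its signals for simulated in-control data; the parameters and false-alarm rate are assumptions for the demo, and the Z chart's treatment of autocorrelation is not implemented here.

```python
# Baseline multivariate chi-square (Hotelling-type, known-parameter) chart, one of
# the benchmarks the Z chart is compared against. Illustrative sketch with simulated
# in-control data; mu, Sigma and the false-alarm rate are assumptions.
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(0)
p = 3                                  # number of quality characteristics
mu = np.zeros(p)                       # known in-control mean vector
Sigma = np.array([[1.0, 0.5, 0.2],
                  [0.5, 1.0, 0.3],
                  [0.2, 0.3, 1.0]])    # known in-control covariance
Sigma_inv = np.linalg.inv(Sigma)
ucl = chi2.ppf(1 - 0.0027, df=p)       # upper control limit for a 0.0027 false-alarm rate

x = rng.multivariate_normal(mu, Sigma, size=200)      # simulated observations
d = x - mu
t2 = np.einsum('ij,jk,ik->i', d, Sigma_inv, d)        # chi-square statistic per observation
signals = np.where(t2 > ucl)[0]
print(f"{len(signals)} signal(s) out of {len(x)} samples")
```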

Item Model tree analysis with randomly generated and evolved trees (M-TARGET) (University of Alabama Libraries, 2010)
Sasamoto, Mark Makoto; Gray, J. Brian; University of Alabama Tuscaloosa
Tree structured modeling is a data mining technique used to recursively partition a data set into relatively homogeneous subgroups in order to make more accurate predictions on future observations. One of the earliest decision tree induction algorithms, CART (Classification and Regression Trees) (Breiman, Friedman, Olshen, and Stone 1984), had problems including greediness, split selection bias, and simplistic formation of classification and prediction rules in the terminal leaf nodes. Improvements are proposed in other algorithms including Bayesian CART (Chipman, George, and McCulloch 1998), Bayesian Treed Regression (Chipman, George, and McCulloch 2002), TARGET (Tree Analysis with Randomly Generated and Evolved Trees) (Fan and Gray 2005; Gray and Fan 2008), and Treed Regression (Alexander and Grimshaw 2006). TARGET, Bayesian CART, and Bayesian Treed Regression introduced stochastically driven search methods that explore the tree space in a non-greedy fashion. These methods enable the tree space to be searched with global optimality in mind, rather than following a series of locally optimal splits. Treed Regression and Bayesian Treed Regression feature the addition of models in the leaf nodes to predict and classify new observations instead of using the mean or weighted majority vote as in traditional regression and classification trees, respectively. This dissertation proposes a new method called M-TARGET (Model Tree Analysis with Randomly Evolved and Generated Trees) which combines the stochastic nature of TARGET with the enhancement of models in the leaf nodes to improve prediction and classification accuracy. Comparisons with Treed Regression and Bayesian Treed Regression using real data sets show favorable results with regard to RMSE and tree size, which suggests that M-TARGET is a viable approach to decision tree modeling.
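
To make the "models in the leaf nodes" idea concrete, the sketch below fits a shallow greedy regression tree and then a separate linear model within each leaf. This only illustrates the leaf-model component of treed regression; M-TARGET's genetic, non-greedy tree search is not reproduced, and the simulated data and tree settings are arbitrary.

```python
# Sketch of a model tree: a shallow regression tree routes observations to leaves,
# and each leaf carries its own linear regression instead of a leaf mean. This is
# not M-TARGET's evolutionary search, just the leaf-model idea.
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
X = rng.uniform(-2, 2, size=(500, 2))
y = np.where(X[:, 0] > 0, 3.0 * X[:, 1], -2.0 * X[:, 1]) + rng.normal(0, 0.3, 500)

tree = DecisionTreeRegressor(max_depth=2, min_samples_leaf=50).fit(X, y)
leaf_of = tree.apply(X)                          # leaf index for every training point
leaf_models = {}
for leaf in np.unique(leaf_of):
    idx = leaf_of == leaf
    leaf_models[leaf] = LinearRegression().fit(X[idx], y[idx])

def predict(X_new):
    """Route each observation to its leaf and apply that leaf's linear model."""
    leaves = tree.apply(X_new)
    out = np.empty(len(X_new))
    for leaf, model in leaf_models.items():
        mask = leaves == leaf
        if mask.any():
            out[mask] = model.predict(X_new[mask])
    return out

print(np.sqrt(np.mean((predict(X) - y) ** 2)))   # in-sample RMSE of the model tree
```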

Item Network design models for intermodal logistics (University of Alabama Libraries, 2010)
Ishfaq, Rafay; Sox, Charles R.; University of Alabama Tuscaloosa
This research is motivated by the extraordinary increase in the use of intermodal shipments in recent years for both domestic and global movement of freight. Three mathematical models, which explore the dynamics of intermodal hub-and-spoke networks, are presented. These models use transportation costs, fixed location costs, modal connectivity costs, modal transit times, and service time requirements for three modes of transportation: road, rail, and air. The objective is to compare conventional over-the-road (OTR) networks with intermodal (IM) logistics networks with respect to financial, operational, and service issues. This research extends the p-hub median approach of interacting hub network design to the intermodal logistics domain. Intermodal shipments are classified as multiple class products with deterministic routings in a serial queuing network. Such an approach is used to model flow congestion at hubs and to study its impact on the hub network design. This research also develops and tests metaheuristic (Tabu Search) solution approaches. These solution approaches are benchmarked with tight lower bounds based on Lagrangian and linear relaxations. These benchmarking studies show that the metaheuristic solution approaches are suitable for solving large intermodal network design problems. In order to bring practical relevance to this work, real-world data are used with the actual footprint of interstate highways, intermodal rail, and air freight networks. Multiple research studies are conducted to gain useful insights into the issues related to the design and management of IM logistics networks. These studies show that the use of IM shipments can provide significant benefits in reducing total logistics costs. It is also shown that the use of IM shipments requires re-alignment of a logistics network in terms of its network structure and service design. Overall, significant savings can be realized from the use of IM shipments, but operating an intermodal logistics network requires a high degree of planning and managerial control.

Item Multivariate time series clustering using kernel variant multi-way principal component analysis (University of Alabama Libraries, 2010)
Choi, Hwanseok; Hardin, J. Michael; University of Alabama Tuscaloosa
Clustering multivariate time series data has been a challenging task for researchers, since the data have multiple dimensions to consider, such as auto-correlations and cross-correlations, even though multivariate time series data have been prevalent in diverse areas for decades. However, for short-period time series data, conventional time series modeling may not satisfy model validity. Multi-way Principal Component Analysis (MPCA) can be used in this case, but the normality assumption restricts its ability to handle nonlinear data such as multivariate time series with high-order interactions. Kernel variant MPCA (KMPCA) is proposed as an alternative solution for this case. To test whether KMPCA can cluster trivariate time series data into two groups, two simulation studies were conducted. The first study uses groups with the same mean structure and error structures that are combinations of three different auto-correlation levels and three different cross-correlation levels. Two groups with different mean structures and nine error structures were generated for the second study. To check that the proposed method works well on real-world data, a study of the obesity-depression relationship was conducted. The simulation studies showed that KMPCA clusters the two different mean structure groups with success rates over 90% when an appropriate kernel function with proper parameters is applied. Similar error structures obstruct clustering performance: strong cross-correlation, weak auto-correlation, and a larger number of temporal points. Accounting for race, obesity and obesity-related variables, especially addictive substance use over 15 years, predicted depressed cohorts at year 20 at rates up to 76% for the Caucasian group and 95% for the African-American group.
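
A rough sketch of the clustering pipeline described above: each short multivariate series is unfolded into a single vector, embedded with an RBF-kernel PCA, and the scores are clustered. The kernel, its gamma, the component count, and the simulated group structure are all assumptions for this demo, not the dissertation's KMPCA settings.

```python
# Rough illustration: unfold each short multivariate time series into one vector,
# embed with an RBF-kernel PCA, then cluster the scores. All settings are
# assumptions for the demo.
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
n_per_group, T, p = 30, 12, 3          # 30 series per group, 12 time points, 3 variables

def simulate(trend):
    """Generate n_per_group series sharing a linear mean trend plus white noise."""
    t = np.arange(T)
    base = trend * t[:, None]
    return np.stack([base + rng.normal(0, 1.0, (T, p)) for _ in range(n_per_group)])

series = np.vstack([simulate(0.25), simulate(-0.25)])   # two mean-structure groups
labels_true = np.repeat([0, 1], n_per_group)
unfolded = series.reshape(len(series), T * p)           # multi-way unfolding

scores = KernelPCA(n_components=5, kernel="rbf", gamma=0.01).fit_transform(unfolded)
labels_hat = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(scores)

# Agreement with the true grouping, up to relabeling of the two clusters.
agreement = max(np.mean(labels_hat == labels_true), np.mean(labels_hat != labels_true))
print(f"clustering agreement with the true grouping: {agreement:.2f}")
```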

Item On the identification of statistically significant network topology (University of Alabama Libraries, 2010)
Michaelson, Gregory Vincent; Perry, Marcus B.; University of Alabama Tuscaloosa
Determining the structure of large and complex networks is a problem that has stirred great interest in many fields including mathematics, computer science, sociology, biomedical research, and epidemiology. Despite this high level of interest, though, there still exists no procedure for formal hypothesis testing to measure the significance of detected community structure in an observed network. First, this work proposes three more general alternatives to modularity, the most common measure of community structure, which allow for the detection of more general structure in networks. An approach based upon the likelihood ratio test is shown not only to be as effective as modularity in detecting modular structure but also able to detect a wide variety of other network topologies. Second, this work proposes a general and novel test, the Likelihood Ratio Cluster (LRC) test, for assessing the statistical significance of the output of clustering algorithms. This technique is demonstrated by applying it to the sample partitions generated by both network and conventional clustering algorithms. Finally, a method for evaluating the capability of heuristic clustering techniques to detect the optimal sample partition is developed. This technique is used to evaluate several common community detection algorithms. Surprisingly, the most popular community detection algorithm is found to be largely ineffective at detecting the optimal partition of a random network. Also surprisingly, Clauset's fast algorithm (Clauset et al., 2004), which is commonly thought to be fast but inaccurate, is found to be the most effective of the algorithms examined at detecting the optimal partition in random networks.

Item Construction of estimation-equivalent second-order split-split-plot designs (University of Alabama Libraries, 2011)
Yuan, Fang; Perry, Marcus B.; University of Alabama Tuscaloosa
In many experimental settings, some experimental factors are very hard or very expensive to change, some factors are hard to change, and some factors are easy to change, which usually leads to a split-split-plot design. In such a case, there are randomization restrictions in the experiment. If the data are analyzed as if they came from a completely randomized design, the results could be misleading. The analysis of split-split-plot designs is more complicated relative to the completely randomized design, as generalized least squares (GLS) is recommended for estimating the factor effects, and restricted maximum likelihood (REML) is recommended for estimating the variance components. As an alternative, one can consider estimation-equivalent designs, wherein ordinary least squares (OLS) and GLS estimates of the factor effects are equivalent. These designs provide practical benefits from the perspective of design selection and estimation and are consistent with traditional response surface methods. Although much work has been done with respect to estimation-equivalent second-order split-plot designs, less emphasis has been placed on split-split-plot (and higher strata) designs of this type. My research derives the general conditions for achieving OLS-GLS equivalence and uses these conditions to construct balanced and unbalanced estimation-equivalent second-order split-split-plot designs from the central composite design (CCD).
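
The OLS-GLS equivalence at the heart of the design work above can be checked numerically for any candidate design: compute both estimators under a split-plot-style error covariance and compare. The design matrix and variance components below are arbitrary illustrations, so the two estimates generally differ; an estimation-equivalent design is one for which they coincide.

```python
# Numerical check of OLS-GLS (in)equivalence for a candidate design under a
# whole-plot (compound-symmetric) error covariance. Design and variance components
# are arbitrary illustrations.
import numpy as np

rng = np.random.default_rng(3)
n_wp, n_sp = 6, 4                                  # whole plots, subplots per whole plot
n = n_wp * n_sp
X = np.column_stack([np.ones(n), rng.uniform(-1, 1, (n, 2))])   # intercept + two factors

sigma2, sigma2_wp = 1.0, 2.0                       # residual and whole-plot variance components
Z = np.kron(np.eye(n_wp), np.ones((n_sp, 1)))      # whole-plot incidence matrix
V = sigma2 * np.eye(n) + sigma2_wp * Z @ Z.T       # error covariance
V_inv = np.linalg.inv(V)

y = X @ np.array([1.0, 2.0, -1.5]) + rng.multivariate_normal(np.zeros(n), V)

beta_ols = np.linalg.solve(X.T @ X, X.T @ y)
beta_gls = np.linalg.solve(X.T @ V_inv @ X, X.T @ V_inv @ y)
print("max |OLS - GLS| difference:", np.max(np.abs(beta_ols - beta_gls)))
```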

Item Contributions to outlier detection methods: some theory and applications (University of Alabama Libraries, 2011)
Dovoedo, Yinaze Herve; Chakraborti, Subhabrata; University of Alabama Tuscaloosa
Tukey's traditional boxplot (Tukey, 1977) is a widely used Exploratory Data Analysis (EDA) tool, often used for outlier detection with univariate data. In this dissertation, a modification of Tukey's boxplot is proposed in which the probability of at least one false alarm is controlled, as in Sim et al. (2005). The exact expression for that probability is derived and is used to find the fence constants for observations from any specified location-scale distribution. The proposed procedure is compared with that of Sim et al. (2005) in a simulation study. Outlier detection and control charting are closely related. Using the preceding procedure, one- and two-sided boxplot-based Phase I control charts for individual observations are proposed for data from an exponential distribution, while controlling the overall false alarm rate. The proposed charts are compared with the charts of Jones and Champ (2002) in a simulation study. Sometimes the practitioner is unable or unwilling to make an assumption about the form of the underlying distribution but is confident that the distribution is skewed. In that case, it is well documented that the application of Tukey's boxplot for outlier detection results in an increased number of false alarms. To this end, in this dissertation, a modification of the so-called adjusted boxplot for skewed distributions of Hubert and Vandervieren (2008) is proposed. The proposed procedure is compared to the adjusted boxplot and Tukey's procedure in a simulation study. In practice, the data are often multivariate. The concept of a (statistical) depth (or, equivalently, outlyingness) function provides a natural, nonparametric, "center-outward" ordering of a multivariate data point with respect to the data cloud. The deeper a point, the less outlying it is. It is then natural to use some outlyingness functions as outlier identifiers. A simulation study is performed to compare the outlier detection capabilities of selected outlyingness functions available in the literature for multivariate skewed data. Recommendations are provided.
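
The sketch below shows the mechanics of a boxplot rule with an adjustable fence constant k, calibrating k by crude Monte Carlo so that the probability of at least one false alarm is roughly a target alpha for normal samples of size n. The dissertation derives this constant exactly for any specified location-scale distribution; the simulation here is only a stand-in for that derivation, and n and alpha are assumptions.

```python
# Boxplot-type outlier rule with adjustable fence constant k: fences at Q1 - k*IQR
# and Q3 + k*IQR. k is calibrated by crude simulation so that P(at least one false
# alarm) is roughly alpha for N(0,1) samples of size n (the exact derivation in the
# dissertation is not attempted here).
import numpy as np

rng = np.random.default_rng(4)
n, alpha = 50, 0.05

def false_alarm_prob(k, reps=2000):
    """Estimate P(any point falls outside the fences) for standard normal samples."""
    hits = 0
    for _ in range(reps):
        x = rng.normal(size=n)
        q1, q3 = np.percentile(x, [25, 75])
        iqr = q3 - q1
        if np.any((x < q1 - k * iqr) | (x > q3 + k * iqr)):
            hits += 1
    return hits / reps

# Bisection on k: widening the fences (larger k) lowers the false-alarm probability.
lo, hi = 1.0, 4.0
for _ in range(15):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if false_alarm_prob(mid) > alpha else (lo, mid)
print(f"fence constant k of about {(lo + hi) / 2:.2f} for n={n}, overall alpha={alpha}")
```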

Item Three inventory models for non-traditional supply chains (University of Alabama Libraries, 2011)
Neve, Benjamin V.; Schmidt, Charles P.; University of Alabama Tuscaloosa
This work considers three different non-traditional supply chain structures with similar demand and replenishment parameters and similar solution techniques. In the first article, we develop an inventory model that addresses inventory rationing based on customer priority. We use the framework of a multi-echelon inventory system to describe the physics of a critical level policy. Extending previous research, we allow multiple demand classes while minimizing a cost objective. We assume a continuous-review, base stock replenishment policy and allow for full backordering. Simulation is used to estimate total expected cost, applying variance reduction to reduce sampling error. First differences are estimated using a perturbation analysis unique to the inventory rationing literature, and heuristics are used to minimize costs. In the second article, we consider a stockless hospital supply chain with inaccurate inventory records. The model presented here is conditional on the level of accuracy in a particular hospital department, or point-of-use (POU). Similar to previous research on inventory inaccuracy, we consider both actual net inventory and recorded inventory in deriving the performance measures. The resultant model is a periodic-review, cost minimization inventory model with full backordering that is centered at the POU. Similar to the previous article, we assume a base stock ordering policy, but in addition to choosing the optimal order-up-to level, we seek the optimal frequency of inventory counts to reconcile inaccurate records. We present both a service level model and a shortage cost model under this framework. In the final article, we consider a hybrid hospital supply chain with both regular and emergency ordering when inventory records are inaccurate. The resultant model is an extension of the previous article in which there are opportunities for both regular replenishments and emergency replenishments. We seek an optimal solution to an approximate cost model, and then we compare the results to a simulation-optimization approach.

Item Three essays on the use of margins to improve ensemble methods (University of Alabama Libraries, 2012)
Martinez Cid, Waldyn Gerardo; Gray, J. Brian; University of Alabama Tuscaloosa
Ensemble methods, such as bagging (Breiman, 1996), boosting (Freund and Schapire, 1997), and random forests (Breiman, 2001), combine a large number of classifiers through (weighted) voting to produce strong classifiers. To explain the successful performance of ensembles, and particularly of boosting, Schapire, Freund, Bartlett and Lee (1998) developed an upper bound on the generalization error of an ensemble based on the margins, from which it was concluded that larger margins should lead to lower generalization error, everything else being equal (sometimes referred to as the "large margins theory"). This result has led many researchers to consider direct optimization of functions of the margins (see, e.g., Grove and Schuurmans, 1998; Breiman, 1999; Mason, Bartlett and Baxter, 2000; and Shen and Li, 2010). In this research, we show that the large margins theory is not sufficient for explaining the performance of AdaBoost. Shen and Li (2010) and Xu and Gray (2012) provide evidence suggesting that generalization error might be reduced by increasing the mean and decreasing the variance of the margins, which we refer to as "squeezing" the margins. For that reason, we also propose several alternative techniques for squeezing the margins and evaluate their effectiveness through simulations with real and synthetic data sets. In addition to the margins being a determinant of the performance of ensembles, we know that AdaBoost, the most common boosting algorithm, can be very sensitive to outliers and noisy data, since it assigns observations that have been misclassified a higher weight in subsequent runs. Therefore, we propose several techniques to identify and potentially delete noisy samples in order to improve its performance.
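
The margin quantities that the "squeezing" discussion above refers to are straightforward to compute from an ensemble's votes. The sketch below does so for simulated weak-learner predictions and weights; these are stand-ins for illustration, not the output of an actual AdaBoost run.

```python
# Margins of a weighted voting ensemble for {-1, +1} classification: the margin of
# an observation is y * sum_t w_t h_t(x) with the weights w_t summing to one, so
# margins lie in [-1, 1] and a positive margin means a correct classification.
# Votes and weights below are simulated stand-ins, not AdaBoost output.
import numpy as np

rng = np.random.default_rng(5)
n_obs, n_learners = 200, 25
y = rng.choice([-1, 1], size=n_obs)                     # true labels
agree = rng.random((n_obs, n_learners)) < 0.7           # each learner is right ~70% of the time
preds = np.where(agree, y[:, None], -y[:, None])
w = rng.random(n_learners)
w /= w.sum()                                            # normalized learner weights

margins = y * (preds @ w)
print(f"mean margin     : {margins.mean():.3f}")
print(f"margin variance : {margins.var():.3f}")
print(f"training error  : {(margins <= 0).mean():.3f}")
```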

Item GA-Boost: a genetic algorithm for robust boosting (University of Alabama Libraries, 2012)
Oh, Dong-Yop; Gray, J. Brian; University of Alabama Tuscaloosa
Many simple and complex methods have been developed to solve the classification problem. Boosting is one of the best known techniques for improving the prediction accuracy of classification methods, but boosting is sometimes prone to overfitting, and the final model is difficult to interpret. Some boosting methods, including AdaBoost, are very sensitive to outliers. Many researchers have contributed to resolving boosting problems, but these problems remain open issues. We introduce a new boosting algorithm, "GA-Boost," which directly optimizes weak learners and their associated weights using a genetic algorithm, along with three extended versions of GA-Boost. The genetic algorithm utilizes a new penalized fitness function that consists of three parameters (a, b, and p) which limit the number of weak classifiers (by b) and control the effects of outliers (by a) to maximize an appropriately chosen p-th percentile of margins. We evaluate GA-Boost performance with an experimental design and compare it to AdaBoost using several artificial and real-world data sets from the UC-Irvine Machine Learning Repository. In experiments, GA-Boost was more resistant to outliers and resulted in simpler predictive models than AdaBoost. GA-Boost can be applied to data sets with three different weak classifier options. The three extended versions of GA-Boost performed very well on two simulation data sets and three real-world data sets.

Item On the detection and estimation of changes in a process mean based on kernel estimators (University of Alabama Libraries, 2012)
Mercado Velasco, Gary Ricardo; Perry, Marcus B.; University of Alabama Tuscaloosa
Parametric control charts are very attractive and have been used in industry for a very long time. However, in many applications the underlying process distribution is not known sufficiently well to assume a specific distribution function. When the distributional assumptions underlying a parametric control chart are violated, the performance of the control chart can be adversely affected. Since robustness to departures from normality is a desirable property for control charts, this dissertation reports three separate papers on the development and evaluation of robust Shewhart-type control charts for both the univariate and multivariate cases. In addition, a statistical procedure is developed for detecting step changes in the mean of the underlying process, given that Shewhart-type control charts are not very sensitive to smaller changes in the process mean. The estimator is intended to be applied following a control chart signal to aid in diagnosing the root cause of change. Results indicate that the methodologies proposed throughout this dissertation research provide robust in-control average run length, better detection performance than that offered by the traditional Shewhart control chart and/or Hotelling's control chart, and meaningful change point diagnostic statistics to aid in the search for the special cause.
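
For reference, the classical normal-theory estimator of a step-change point, applied after a chart signal, picks the time index that maximizes a scaled distance between the post-change average and the in-control mean. The sketch below implements that standard estimator on simulated data with a known in-control mean; it is not the kernel-based procedure developed in the dissertation, and all parameter values are assumptions.

```python
# Classical estimator of the time of a step change in a process mean, applied after
# a (simulated) chart signal, with the in-control mean mu0 known. This is the
# standard reference estimator, not the dissertation's kernel-based procedure.
import numpy as np

rng = np.random.default_rng(6)
mu0, sigma, tau, delta, T = 0.0, 1.0, 60, 0.8, 100
x = rng.normal(mu0, sigma, T)
x[tau:] += delta                                 # step change of size delta at time tau

# For each candidate change time t, weight the squared distance between the post-t
# average and mu0 by the number of post-t observations; the estimate maximizes this.
crit = np.array([(T - t) * (x[t:].mean() - mu0) ** 2 for t in range(T - 1)])
tau_hat = int(np.argmax(crit))
print(f"true change point: {tau}, estimated change point: {tau_hat}")
```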

Item Vehicle routing models in public safety and health care (University of Alabama Libraries, 2012)
Li, Rong; Keskin, Burcu Baris; University of Alabama Tuscaloosa
Routing-related costs constitute a significant portion of the overall logistics costs in most service industries. Private companies continuously strive to reduce their vehicle routing costs to maintain a better standing in the competitive business world. In contrast, the public sector has not paid enough attention to its vehicle routing efficiency, and vehicle routing inefficiencies have resulted in wasted resources. Due to recent funding cuts and economic hardship, public agencies need to improve their vehicle routing efficiency. To aid public agencies in resolving inefficiencies in their operations, we propose several challenging vehicle routing problems in the public sector and address them through mathematical modeling. To achieve this goal, in this dissertation we study vehicle routing problems in two fields: i) vehicle routing in public safety, i.e., how state troopers can patrol the roadways more efficiently and effectively; and ii) vehicle routing in health care, i.e., how caregivers are assigned to patients with home care needs and how they schedule their visit sequences. In the context of public safety, we present two models: i) a single-period, single-depot team orienteering problem with time windows and ii) a multi-period, multi-depot team orienteering problem with time windows. In the context of home health care, we present a multi-period, multi-depot vehicle routing problem with constraints specific to the health care industry. All of these models are mixed-integer programs and are computationally intractable. We solve them using either heuristics (local search, tabu search, simulated annealing) or a decomposition method (column generation). The first public safety model finds an efficient patrolling plan for a single state trooper post. The second public safety model improves the coverage of the roadway by allowing multiple state trooper posts. The home health care model demonstrates improvements over current practice with respect to traveling cost and workload balance, and answers the question of whether to invest in purchasing centrifuges.

Item Contributions to joint monitoring of location and scale parameters: some theory and applications (University of Alabama Libraries, 2012)
McCracken, Amanda Kaye; Chakraborti, Subhabrata; University of Alabama Tuscaloosa
Since their invention in the 1920s, control charts have been popular tools for monitoring processes in fields as varied as manufacturing and healthcare. Most of these charts are designed to monitor a single process parameter, but recently a number of charts and schemes for jointly monitoring the location and scale of processes that follow two-parameter distributions have been developed. These joint monitoring charts are particularly relevant for processes in which special causes may result in a simultaneous shift in the location parameter and the scale parameter. Among the available schemes for jointly monitoring location and scale parameters, the vast majority are designed for normally distributed processes for which the in-control mean and variance are known rather than estimated from data. When the process data are non-normally distributed or the process parameters are unknown, alternative control charts are needed. This dissertation presents and compares several control schemes for jointly monitoring data from Laplace and shifted exponential distributions with known parameters, as well as a pair of charts for monitoring data from normal distributions with unknown mean and variance. The normal theory charts are adaptations of two existing procedures for the known parameter case, Razmy's (2005) Distance chart and Chen and Cheng's (1998) Max chart, while the Laplace and shifted exponential charts are designed using an appropriate statistic for each parameter, such as the maximum likelihood estimators.
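
The Max chart idea of Chen and Cheng referenced above can be sketched for the known-parameter normal case: transform the subgroup mean and variance to standard normal scores and plot the larger absolute score against a single limit. The subgroup size, in-control parameters, and false-alarm rate below are assumptions for the demo, and the Laplace and shifted exponential variants studied in the dissertation are not shown.

```python
# Sketch of a Max-type chart for jointly monitoring the mean and variance of a
# normal process with known in-control parameters: each subgroup's mean and variance
# are transformed to standard normal scores and the plotted statistic is the larger
# absolute score. Subgroup size, parameters and alpha are assumptions.
import numpy as np
from scipy.stats import norm, chi2

rng = np.random.default_rng(7)
mu0, sigma0, n, alpha = 10.0, 2.0, 5, 0.005
m = 50                                                  # number of subgroups
x = rng.normal(mu0, sigma0, (m, n))

z_mean = (x.mean(axis=1) - mu0) / (sigma0 / np.sqrt(n))
# Probability-integral transform of the subgroup variance, then map to a normal score.
u = chi2.cdf((n - 1) * x.var(axis=1, ddof=1) / sigma0**2, df=n - 1)
z_var = norm.ppf(u)

max_stat = np.maximum(np.abs(z_mean), np.abs(z_var))
ucl = norm.ppf((1 + np.sqrt(1 - alpha)) / 2)            # gives P(max > UCL) = alpha in control
print("signals at subgroups:", np.where(max_stat > ucl)[0])
```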

Item Three essays on improving ensemble models (University of Alabama Libraries, 2013)
Xu, Jie; Gray, J. Brian; University of Alabama Tuscaloosa
Ensemble models, such as bagging (Breiman, 1996), random forests (Breiman, 2001a), and boosting (Freund and Schapire, 1997), have better predictive accuracy than single classifiers. These ensembles typically consist of hundreds of single classifiers, which makes future predictions and model interpretation much more difficult than for single classifiers. Breiman (2001b) gave random forests a grade of A+ in predictive performance, but a grade of F in interpretability. Breiman (2001a) also mentioned that the performance of an ensemble model depends on the strengths of the individual classifiers in the ensemble and the correlations among them. Reyzin and Schapire (2006) stated that "the margins explanation basically says that when all other factors are equal, higher margins result in lower error," which is referred to as the "large margin theory." Shen and Li (2010) showed that the performance of an ensemble model is related to the mean and the variance of the margins. In this research, we improve ensemble models from two perspectives: increasing interpretability and/or decreasing the test error rate. We first propose a new method based on quadratic programming that uses information on the strengths of the individual classifiers in the ensemble and their correlations to improve or maintain the predictive accuracy of an ensemble while significantly reducing its size. In the second essay, we improve the predictive accuracy of random forests by adding an AdaBoost-like improvement step to random forests. Finally, we propose a method to improve the strength of the individual classifiers by using fully grown trees fitted on weighted resampled training data and then combining the trees by using the AdaBoost method.
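
A toy version of the first essay's idea, choosing member weights with a quadratic criterion that rewards individual strength and penalizes pairwise correlation, appears below; members whose weights shrink toward zero can be pruned to reduce the ensemble's size. The strengths, the correlation matrix, the trade-off parameter, and the use of a general-purpose solver in place of a dedicated quadratic programming code are all assumptions for illustration.

```python
# Toy weighting/pruning of ensemble members with a quadratic criterion that rewards
# individual strength and penalizes pairwise correlation. All inputs are fabricated,
# and a general-purpose solver stands in for a dedicated QP code.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(8)
k, lam = 10, 2.0
strength = rng.uniform(0.55, 0.75, k)          # e.g., individual accuracies
B = rng.normal(size=(k, k))
C = B @ B.T
d = np.sqrt(np.diag(C))
corr = C / np.outer(d, d)                      # a valid (random) correlation matrix

def objective(w):
    # Penalize correlated members, reward strong ones.
    return lam * w @ corr @ w - strength @ w

cons = ({"type": "eq", "fun": lambda w: w.sum() - 1.0},)
res = minimize(objective, x0=np.full(k, 1.0 / k), bounds=[(0.0, 1.0)] * k,
               constraints=cons, method="SLSQP")
weights = res.x
keep = weights > 1e-3                          # members with negligible weight get pruned
print("kept members :", np.where(keep)[0])
print("their weights:", np.round(weights[keep], 3))
```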

Item Feature selection for evolutionary Commercial-Off-The-Shelf software: studies focusing on time-to-market, innovation and hedonic-utilitarian trade-offs (University of Alabama Libraries, 2013)
Kakar, Adarsh Kumar Satindarlal; Hale, Joanne E.; University of Alabama Tuscaloosa
Feature selection is one of the most important decisions made by product managers. This three-article study investigates the concepts, tools, and techniques for making trade-off decisions when introducing new features in evolving Commercial-Off-The-Shelf (COTS) software products. The first article investigates the efficacy of various feature selection techniques when the trade-off is between comprehensiveness and time-to-market. The second article investigates the impact of the current level of product performance when the trade-off is between providing different types of innovative features to users. The third article investigates the impact on the ability of the COTS product to attract new users and retain existing users when the trade-off is between providing utilitarian and hedonic value through new product features. To meet these research goals, an extensive multidisciplinary study of the Information Systems (IS) and Product Development literatures was conducted, followed by experimental research. The experiments were conducted among 19- to 24-year-old users of Gmail and produced some key findings. In the first study, the Kano survey method was found to be effective in identifying which features added value to the product and which did not. This finding will help product managers use appropriate techniques for identifying the critical product features to be built into the COTS product, thereby reducing time-to-market without sacrificing product quality. In the second study, current COTS product performance was found to significantly impact the type of innovation to be introduced into the COTS product. Basic or Core product innovations were found to have value for users when performance is low but not when performance is high. On the other hand, Expected or product Performance innovations and Augmented or user Excitement innovations were found to have value when performance is high but not when performance is low. In the third study, the Hedonic value and Utilitarian value of product features were found to have distinctive impacts on users. While Hedonic value impacted Word-of-Mouth, a measure of the product's capacity to attract new customers, Utilitarian value impacted User Loyalty, a measure of the product's capacity to retain existing customers.
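
The Kano survey method mentioned in the first study above is commonly scored with a standard evaluation table that maps each respondent's paired functional/dysfunctional answers to a category. The sketch below encodes the commonly published version of that table; it is not reproduced from this study, and the sample responses and the "modal category" decision rule are hypothetical.

```python
# Standard Kano evaluation-table logic: each respondent answers a functional
# ("feature present") and a dysfunctional ("feature absent") question on the scale
# like / must-be / neutral / live-with / dislike, and the pair maps to a category.
# The table encoded here is the commonly published one, not taken from the study.
from collections import Counter

def kano_category(functional, dysfunctional):
    if functional == "like" and dysfunctional == "dislike":
        return "One-dimensional"
    if functional == "like":
        return "Questionable" if dysfunctional == "like" else "Attractive"
    if dysfunctional == "dislike":
        return "Questionable" if functional == "dislike" else "Must-be"
    if functional == "dislike" or dysfunctional == "like":
        return "Reverse"
    return "Indifferent"

# Hypothetical responses for one candidate feature: (functional, dysfunctional) pairs.
responses = [("like", "dislike"), ("like", "neutral"), ("neutral", "dislike"),
             ("like", "dislike"), ("neutral", "neutral"), ("like", "live-with")]
votes = Counter(kano_category(f, d) for f, d in responses)
print(votes.most_common(1)[0])   # the modal category suggests how to treat the feature
```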

Item Coordination of pricing, sourcing and product design decisions in a supply chain (University of Alabama Libraries, 2013)
Liu, Bing; Sox, Charles R.; University of Alabama Tuscaloosa
Supply chain management is more than a movement advocated by a few pioneers today; it is a strategic as well as operational revolution in business operations. A better understanding of the underlying principles of supply chain scenarios supports the wide acceptance that coordination along supply chain links can achieve a win-win situation. Meanwhile, the penetration of the Internet and information technology into every corner of our lives enables large-scale interaction and cooperation between multiple business units and/or end consumers. Supply chain management has gone beyond the early stage in which basic cases were discussed for pedagogical purposes; more realistic application scenarios are yet to be identified and investigated. Supply chain performance can be affected by a large set of factors, and a variety of supply chain scenarios are in operation. This dissertation identifies a number of typical application scenarios and focuses on decision issues integrating pricing, sourcing, product attributes, and evolving partnerships between firms. It consists of three manuscripts to be submitted for journal publication. The first paper studies the coordination of pricing and sourcing in two scenarios in which multiple suppliers and multiple retail channels are involved. This study investigates the cross-price effects between retail channels on overall profitability. In addition to the analytical analysis, a number of numerical experiments are conducted to investigate realistic issues decision makers may encounter. The second paper studies a supply chain in which a manufacturer sells a configurable product through a retailer. We treat the configurable product as a parameterized product and examine the impact of such a feature decision on the equilibrium between the manufacturer and the retailer. The analysis considers three different cost functions, (1) linear, (2) quadratic, and (3) exponential, and examines how the cost functions affect the optimal solutions. The third paper presents a novel framework in which the supply chain structure evolves from one stage to the next in terms of changing memberships and business partnerships between members. Research work to date on supply chain coordination assumes a static supply chain structure, which remains the same throughout the sequence of events. In reality, however, supply chain memberships and the partnerships between member firms are not all established at one time, and the supply chain structure is subject to evolution. Firms may join or leave a supply chain network, and the partnerships between two firms may change as well. Among a large variety of possible evolving scenarios, this research focuses on a case in which a second manufacturer joins a supply chain initially established with one manufacturer and one retailer. The supply chain evolves from the stage with one partnership to the next stage with two partnerships. The two partnerships are established one after another on two competing and differentiated products. Compatibility between the demand models in the two stages is established. Based on the analytical, non-closed-form solution, a number of numerical experiments are developed to demonstrate the impact of the introduction of a competing product on the optimal solution for product one. This dissertation conducts a series of exploratory studies on supply chain coordination. It considers coordination not only between multiple players but also across multiple strategies, and it considers products specified not only by price but also by configurable attributes. It puts forward a novel framework, the evolving supply chain, in contrast to the static supply chain that is the typical research subject in the literature. This dissertation provides a set of insightful results that help to better understand the underlying coordination mechanisms and to make wise decisions as needed.

Item Coordination of pricing, advertising, and production decisions for multiple products (University of Alabama Libraries, 2013)
Bajwa, Ahmad Naeem; Sox, Charles R.; University of Alabama Tuscaloosa
This research aims to develop and propose mathematical models that can be used to facilitate cross-functional coordination between operations and marketing. We consider a dynamic problem of joint pricing, advertising, and production decisions for a profit-maximizing firm that produces multiple products. We assume the firm operates in a monopolistic environment, where demand for its products is a function of price and advertising expenditure. We model the problem as a mixed-integer nonlinear program, incorporating capacity constraints, setup costs, and demand seasonality. We first model and solve the pricing problem without advertising. Later, we extend the model to include advertising decision variables. The demand for each product is assumed to be continuous, differentiable, strictly decreasing in price, and concave in advertising. We present a solution approach that can be used for constant pricing as well as dynamic pricing strategies. Furthermore, the solution approach is general and applicable to linear as well as nonlinear demand functions. Using real-world data from a manufacturer, we create problem instances for different demand scenarios at different capacities and solve for the optimal prices under each strategy. We present analytical results that provide managerial insights into how the optimal prices change for different production plans and at different capacities. We compare the firm's profitability under the two pricing strategies and show that dynamic pricing is valuable at low capacities and when at least one of the products has peak demand at the beginning of the planning horizon. We show that the optimal allocation of the advertising budget across products does not change with budget changes; moreover, the change is minimal with changes in demand seasonality. Hence, firms can use rules of thumb and need not frequently review the allocation. Numerical results show that the proposed algorithms have good convergence properties. Finally, as is clear from a review of the academic literature, there are no decision support systems that truly integrate production/inventory and pricing decisions, especially for multi-product problems. We believe this work makes valuable contributions in developing solution methodologies that can be incorporated into such decision support systems.
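
A single-product, single-period toy of the pricing-advertising trade-off studied above: demand is linear and decreasing in price and concave in advertising, and profit is maximized with a general-purpose optimizer. The functional form, parameter values, and bounds are invented for the example; the dissertation's model is a capacitated, multi-product mixed-integer nonlinear program solved quite differently.

```python
# Single-product, single-period toy of the pricing-advertising trade-off: demand is
# linear and decreasing in price and concave (square root) in advertising, and
# profit = (price - unit cost) * demand - advertising spend. All numbers and the
# functional form are invented for illustration.
import numpy as np
from scipy.optimize import minimize

a, b, k, c = 100.0, 2.0, 2.0, 10.0   # base demand, price sensitivity, ad effectiveness, unit cost

def neg_profit(v):
    price, adv = v
    demand = max(a - b * price + k * np.sqrt(adv), 0.0)
    return -((price - c) * demand - adv)

res = minimize(neg_profit, x0=[30.0, 100.0], bounds=[(c, 80.0), (0.0, 2000.0)])
price_opt, adv_opt = res.x
print(f"optimal price {price_opt:.2f}, advertising {adv_opt:.2f}, profit {-res.fun:.1f}")
```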