Theses and Dissertations - Department of Mathematics
Browsing Theses and Dissertations - Department of Mathematics by Subject "Applied mathematics"
Now showing 1 - 16 of 16
Item: Asymptotic analysis of mass-dominated convection in a nanofluid (University of Alabama Libraries, 2014). Dar Assi, Mahmoud H.; Hadji, Layachi; University of Alabama Tuscaloosa.
The threshold conditions for the onset of convection in colloidal suspensions are investigated using the particulate medium formulation. We consider a dilute liquid suspension of solid spherical particles that is confined between two horizontal plates of infinite extent placed at the vertical coordinates Z=0 and Z=H. The plates are assumed to be rigid, perfectly conducting, and impermeable to mass flow. The suspension is heated from below. A quasi-Boussinesq approximation is adopted, i.e., the density is assumed constant except in the gravity term, where it depends on both temperature and concentration. Both the fluid viscosity and the particle diffusion coefficient are allowed to depend on the particle concentration: through the Einstein formula in the dilute case, and through the general empirical suspension viscosity formula μ = μ_0 (1 - C/C_M)^(-2) in the moderately concentrated case, where μ_0 is the dynamic viscosity of the base fluid and C_M is the maximum packing volume fraction of a hard-sphere particle suspension. An experimental parameter, β, is introduced to depict the coupled effects of thermophoresis, sedimentation, and particle diffusion. For a given experimental setup, β is a function of the particle size. The graph of β as a function of the particle radius is an inverted parabola with two zero crossings. The first zero crossing occurs near zero particle radius. The second zero crossing occurs at a larger particle radius, although still in the nanosize range.

Item: Augmented Lagrangian method for Euler's elastica based variational models (University of Alabama Libraries, 2016). Chen, Mengpu; Zhu, Wei; University of Alabama Tuscaloosa.
Euler's elastica is widely applied in digital image processing. It is very challenging to minimize the Euler's elastica energy functional due to the high-order derivative of the curvature term, and the computational cost is high when using traditional time-marching methods; hence the development of fast methods is necessary. In the literature, the augmented Lagrangian method (ALM) is used by Tai, Hahn, and Chung to solve the minimization problem of the Euler's elastica functional and is proven to be more efficient than the gradient descent method. However, several auxiliary variables are introduced as relaxations, which means more penalty parameters must be handled and considerable effort is needed to choose optimal values. In this dissertation, we employ a novel technique by Bae, Tai, and Zhu, which treats curvature-dependent functionals using ALM with fewer Lagrange multipliers, and apply it to a wide range of imaging tasks, including image denoising, image inpainting, image zooming, and image deblurring. Numerical experiments demonstrate the efficiency of the proposed algorithm, show that it gives better results with higher SNR/PSNR, and confirm that it makes choosing optimal parameters more convenient.

Item: A case-study of using tensors in multi-way electroencephalograme (EEG) data analysis (University of Alabama Libraries, 2020). Milligan-Williams, Essence; Sidje, Roger B.; University of Alabama Tuscaloosa.
Tensors are multi-dimensional arrays that can represent large datasets. Acquiring large datasets has its pros and cons: a pro is that the bigger the dataset, the more information can potentially be generated; a con is the amount of labor needed to process this information. To combat this con, researchers in a variety of fields rely on tensor decomposition. Tensor decomposition's goal is to compress the data without losing any significant information. Tensor decomposition is also known for its ability to extract underlying features that could not be seen at face value. One such field is electroencephalography, the study of electroencephalograms (EEG). An electroencephalogram is a brain imaging tool that measures brain electrical activity. Because EEG can be recorded continuously for long periods of time (hours, days, even weeks), it tends to produce massive multi-dimensional datasets. To process the data, tensor decomposition methods such as Parallel Factor Analysis (PARAFAC) and Tucker decomposition can be executed on these large datasets.
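
As a rough sketch of the PARAFAC idea mentioned in the abstract above, the following NumPy code runs a minimal CP alternating least squares loop on a small synthetic channel-by-frequency-by-time array. The array shapes, rank, and iteration count are invented for illustration; this is not the pipeline used in the thesis, and production work would normally rely on a dedicated library such as TensorLy.

```python
import numpy as np

def unfold(T, mode):
    """Matricize tensor T along the given mode (mode-n unfolding, C order)."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def khatri_rao(mats):
    """Column-wise Khatri-Rao product of a list of factor matrices."""
    r = mats[0].shape[1]
    out = mats[0]
    for M in mats[1:]:
        out = np.einsum('ir,jr->ijr', out, M).reshape(-1, r)
    return out

def cp_als(T, rank, n_iter=200, seed=0):
    """Rank-`rank` CP (PARAFAC) decomposition by alternating least squares."""
    rng = np.random.default_rng(seed)
    factors = [rng.standard_normal((s, rank)) for s in T.shape]
    for _ in range(n_iter):
        for mode in range(T.ndim):
            others = [factors[m] for m in range(T.ndim) if m != mode]
            kr = khatri_rao(others)            # ordering matches the unfolding above
            factors[mode] = unfold(T, mode) @ kr @ np.linalg.pinv(kr.T @ kr)
    return factors

# Hypothetical EEG-like array: channels x frequency bins x time points,
# built from a known rank-3 model plus noise, then re-factored.
rng = np.random.default_rng(1)
true = [rng.standard_normal((s, 3)) for s in (32, 24, 200)]
X = np.einsum('ir,jr,kr->ijk', *true) + 0.01 * rng.standard_normal((32, 24, 200))
A, B, C = cp_als(X, rank=3)
X_hat = np.einsum('ir,jr,kr->ijk', A, B, C)
print("relative reconstruction error:", np.linalg.norm(X - X_hat) / np.linalg.norm(X))
```
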
Item: Efficient approximation of the stationary solution to the chemical master equation (University of Alabama Libraries, 2019). Reid, Brandon M.; Sidje, Roger B.; University of Alabama Tuscaloosa.
When studying chemical reactions at the cellular level, it is often helpful to model the system using the continuous-time Markov chain (CTMC) that results from the chemical master equation (CME). It is frequently instructive to compute the probability distribution of this CTMC at statistical equilibrium, thereby gaining insight into the stationary, or long-term, behavior of the system. Computing such a distribution directly is problematic when the state space of the system is large. To alleviate this difficulty, it has become popular to constrain the computational burden by using a finite state projection (FSP), which aims only to capture the most likely states of the system, rather than every possible state. We propose efficient methods to further narrow these states to those that remain highly probable in the long run, after the transient behavior of the system has dissipated. Our strategy is to quickly estimate the local maxima of the stationary distribution using the reaction rate formulation, which is of considerably smaller size than the full-blown chemical master equation, and from there develop adaptive schemes to profile the distribution around the maxima. The primary focus is on constructing an efficient FSP; however, we also examine how some of our initial estimates perform on their own and discuss how they might be applied to tensor-based methods. We include numerical tests that show the efficiency of our approaches.
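
To make the finite state projection idea above concrete, here is a minimal sketch for a one-species birth-death system whose CME generator, truncated to a small window of likely states, can be solved directly for its stationary distribution. The reaction rates and truncation level are invented for illustration, and this is not the adaptive profiling scheme developed in the dissertation.

```python
import numpy as np
from math import exp, factorial

# One-species birth-death system: 0 --k--> S (production), S --g*n--> 0 (decay).
# The CME generator is truncated (projected) to the states n = 0..N, and the
# stationary distribution is found as the null vector of the generator,
# normalized to sum to one.  Rates and truncation level are illustrative.
k, g, N = 10.0, 1.0, 60

A = np.zeros((N + 1, N + 1))      # A[i, j] = transition rate from state j to state i
for n in range(N + 1):
    if n < N:
        A[n + 1, n] += k          # birth: n -> n + 1
    if n > 0:
        A[n - 1, n] += g * n      # death: n -> n - 1
    A[n, n] = -A[:, n].sum()      # columns of a generator sum to zero

# Solve A p = 0 together with sum(p) = 1 as an overdetermined least-squares system.
M = np.vstack([A, np.ones(N + 1)])
rhs = np.zeros(N + 2); rhs[-1] = 1.0
p, *_ = np.linalg.lstsq(M, rhs, rcond=None)

# For this simple system the exact stationary law is Poisson with mean k/g.
lam = k / g
exact = np.array([exp(-lam) * lam**n / factorial(n) for n in range(N + 1)])
print("mode of the projected stationary distribution:", int(np.argmax(p)))
print("max deviation from the exact Poisson law:", np.abs(p - exact).max())
```
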
Item: Inexact methods for the chemical master equation with constant or time-varying propensities, and application to parameter inference (University of Alabama Libraries, 2018). Dinh, Khanh Ngoc; Sidje, Roger B.; University of Alabama Tuscaloosa.
Complex reaction networks arise in molecular biology and in many other fields of science, such as ecology and social science. A familiar approach to modeling such problems is to find their master equation. In systems biology, the equation is called the chemical master equation (CME), and solving the CME is a difficult task because of the curse of dimensionality. The goal of this dissertation is to alleviate this curse via the finite state projection (FSP), both when the CME matrix is constant (if the reaction rates are time-independent) and when it is time-varying (if the reaction rates change over time). The work includes a theoretical characterization of the FSP truncation technique, showing that it can be put in the framework of inexact Krylov methods that relax matrix-vector products and compute them expediently by trading accuracy for speed. We also examine practical applications of our work to the delay CME and to parameter inference through local and global optimization schemes.

Item: Matched interface and boundary enhanced multiresolution time-domain algorithm for electromagnetic simulations (University of Alabama Libraries, 2011). Yao, Pengfei; Zhao, Shan; University of Alabama Tuscaloosa.
The present work introduces a new boundary closure treatment for the wavelet-based multiresolution time-domain (MRTD) solution of Maxwell's equations [1]. Accommodating nontrivial boundary conditions, such as the Robin condition or time-dependent conditions, has been a challenging issue in the MRTD analysis of wave scattering, radiation, and propagation. A matched interface and boundary multiresolution time-domain (MIB-MRTD) method is introduced to overcome this difficulty. Several numerical benchmark tests are carried out to validate the MIB-MRTD method. Dispersion and stability analyses for the MIB-MRTD method are conducted and compared with the high-order finite difference time-domain (FDTD) method. The proposed boundary treatment can also be applied to other high-order approaches, such as the dispersion-relation-preserving (DRP) method. The MIB boundary scheme greatly enhances the feasibility of applying MRTD methods to more complicated electromagnetic structures.

Item: Mathematically modeling the spread of methamphetamine use (University of Alabama Libraries, 2014). Bucher, Bernadette Kathleen; Moen, Kabe; University of Alabama Tuscaloosa.
The use of methamphetamine is rising faster than that of most other hard drugs, such as cocaine and heroin. To date, mathematical models have not been used to explore the dynamics of methamphetamine use in a population. We propose five mathematical models that can predict and evaluate methamphetamine use: a compartmental model for rural areas, a compartmental model for urban areas, an optimal control model for rural areas, an optimal control model for urban areas, and a metapopulation model. Both the optimal control and metapopulation models are built by extending the proposed compartmental structures. We separate models for urban and rural regions due to differing community characteristics that affect the manner in which methamphetamine is brought into and distributed throughout populations. As in models for the spread of infectious diseases, the interaction between susceptible, using, dealing, and recovered individuals in the illicit drug-using population acts as the mechanism for the spread of methamphetamine use in each of our models. Thus, we use many techniques from the infectious disease modeling literature in the analysis of our models. We also consider several applications of our models to data on methamphetamine use from Hawaii and Missouri. Our models give several important insights into previously observed yet unexplained characteristics regarding the dynamics of methamphetamine spread and the distribution of its use throughout the United States.
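
For readers unfamiliar with compartmental models, the sketch below integrates a generic susceptible/using/dealing/recovered system with SciPy, in the spirit of the infectious-disease analogy described in the abstract. The flow structure, the functional forms, and every parameter value here are hypothetical placeholders, not the models proposed in the dissertation.

```python
from scipy.integrate import solve_ivp

# Hypothetical susceptible (S) / using (U) / dealing (D) / recovered (R)
# compartments with made-up rates; initiation is driven by contact with
# users and dealers, as in standard epidemic models.
beta  = 0.4    # initiation rate through contact
delta = 0.05   # rate at which users become dealers
gamma = 0.1    # treatment / cessation rate
rho   = 0.02   # relapse rate from the recovered class

def rhs(t, y):
    S, U, D, R = y
    N = S + U + D + R
    initiation = beta * S * (U + D) / N
    return [-initiation,
            initiation - (delta + gamma) * U + rho * R,
            delta * U - gamma * D,
            gamma * (U + D) - rho * R]

sol = solve_ivp(rhs, (0.0, 200.0), [990.0, 8.0, 2.0, 0.0], max_step=1.0)
S, U, D, R = sol.y[:, -1]
print(f"t = 200: S = {S:.0f}, U = {U:.0f}, D = {D:.0f}, R = {R:.0f}")
```
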
Item: Metabolic network inference with the graphical lasso (University of Alabama Libraries, 2015). Aicher, Joseph Krittameth; Song, Song; Reed, Laura K.; University of Alabama Tuscaloosa.
Metabolic networks describe the interactions and reactions between different metabolites (e.g., sugars, fatty acids, amino acids) in a biological system, which together give rise to the chemical processes that make life possible. Efforts to further knowledge of the structure of metabolic networks have taken place for well over a century through the work of numerous biochemists and have revolutionized our understanding of biology and the capabilities of modern medicine. The recent introduction of metabolomics technologies, which allow the simultaneous measurement of the concentrations of a significant number of metabolites, has led to the development of mathematical and statistical algorithms that aim to use the information and data these technologies have made available to make inferences about the structure of metabolic networks. In this thesis, I investigate the application of the graphical lasso algorithm to metabolomics data for the purpose of metabolic network inference. I use the graphical lasso on a metabolomics dataset collected by gas chromatography-mass spectrometry from Drosophila melanogaster to estimate graphical models of varying levels of sparsity that describe the conditional dependence structure of the observed metabolite concentrations. With these estimated models, I describe how they can be selected and interpreted in the context of both the data and the underlying biology to inform our knowledge of metabolic network structure.
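
The scikit-learn implementation of the graphical lasso gives a feel for the estimator described above. The synthetic matrix standing in for metabolite concentrations, the planted dependencies, and the regularization strength are all invented; the thesis applies the method to GC-MS measurements from Drosophila melanogaster.

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

# Sketch of graphical-lasso network inference on a synthetic stand-in for a
# metabolite-concentration matrix (samples x metabolites).
rng = np.random.default_rng(0)
n_samples, n_metabolites = 200, 12
X = rng.standard_normal((n_samples, n_metabolites))
X[:, 1] += 0.8 * X[:, 0]          # plant a couple of conditional dependencies
X[:, 5] += 0.6 * X[:, 4]

model = GraphicalLasso(alpha=0.2).fit(X)   # larger alpha -> sparser network
precision = model.precision_               # zero entries = conditional independence

# Edges of the estimated conditional-dependence graph
edges = [(i, j) for i in range(n_metabolites) for j in range(i + 1, n_metabolites)
         if abs(precision[i, j]) > 1e-6]
print("estimated edges:", edges)
```
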
Item: Parallel stochastic simulation of biochemical reaction systems (University of Alabama Libraries, 2019). Cook, Keisha; Sidje, Roger B.; University of Alabama Tuscaloosa.
Chemical reactions of various scales occur in nature and in our bodies. As technology has improved, researchers have gained access to in-depth knowledge about the relationships between the moving parts of a chemical reaction system. This has led to a multitude of studies by researchers who strive to understand the background and behavior of these systems both experimentally and mathematically. Computational biology gives us the opportunity to study chemical processes through a model-based approach, in which algorithms are used to simulate and interpret biological systems and to validate our models with data when available. A number of biological processes, such as interactions between molecules, cells, organs, and tissues in the body, can be modeled mathematically, making this useful in medicine, biology, chemistry, biophysics, statistics, genomics, and more. Mathematically, biochemical processes can be modeled deterministically and stochastically. The reaction rate equations (RREs), in the form of a system of ODEs, are used to model deterministically. The chemical master equation (CME), in the form of a Markov chain, is used to model stochastically. When the CME becomes computationally expensive, methods such as the Stochastic Simulation Algorithm (SSA), the Tau-Leap method, the First Reaction Method (FRM), and the Delay Stochastic Simulation Algorithm (DSSA) are used to simulate the change in population of the species in a system over time. For accuracy when examining the resulting data, models are simulated many times in order to produce probability distributions of the involved species. An increase in the size and complexity of a system leads to an increase in the computational time needed to simulate a model. Parallel processing is used to speed up the computational time of simulating biochemical processes via the aforementioned methods. Numerical results are illustrated for various models found in science.
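
As a minimal illustration of the SSA mentioned above, the sketch below simulates a single-species production/degradation system with Gillespie's direct method and repeats the run many times to build the empirical distribution of the copy number. The reaction set and rates are stand-ins, not a system from the dissertation; the repeated runs are independent, which is exactly what makes this kind of study amenable to parallel processing.

```python
import numpy as np

rng = np.random.default_rng(42)

def ssa(x0, k_prod, k_deg, t_end):
    """One SSA trajectory of 0 -> S (rate k_prod) and S -> 0 (rate k_deg * x)."""
    t, x = 0.0, x0
    times, states = [t], [x]
    while True:
        a = np.array([k_prod, k_deg * x])      # reaction propensities
        a0 = a.sum()
        if a0 == 0:
            break
        dt = rng.exponential(1.0 / a0)         # time to the next reaction
        if t + dt > t_end:
            break
        t += dt
        x += 1 if rng.random() < a[0] / a0 else -1   # choose which reaction fires
        times.append(t); states.append(x)
    return np.array(times), np.array(states)

# Many independent runs give the empirical distribution of the species count.
finals = [ssa(0, k_prod=10.0, k_deg=1.0, t_end=20.0)[1][-1] for _ in range(1000)]
print("mean copy number at t = 20:", np.mean(finals))   # ~ k_prod / k_deg = 10
```
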
Item: Performance evaluation of inexact GMRES (University of Alabama Libraries, 2011). Winkles, Nathan; Sidje, Roger B.; University of Alabama Tuscaloosa.
Iterative methods are aimed at sparse linear systems that arise in many applications (e.g., PDEs, biology, computer science, engineering). These applications give rise to matrices that differ in structure and characteristics, and these differences ultimately impact the solvers. The Generalized Minimal Residual method (GMRES) is a widely used solver due to its robustness. The inexact GMRES algorithm is a variant of GMRES in which matrix-vector products are performed inexactly, either out of necessity or deliberately, in view of trading accuracy for speed. Recent studies have shown that relaxing matrix-vector products this way can be justified theoretically and experimentally. Research so far has focused on decreasing the workload per iteration without significantly affecting the accuracy. But relaxing the accuracy per iteration may increase the number of iterations, thereby increasing the overall runtime, which could end up greater than that of exact GMRES if there are not enough savings in the matrix-vector products. In this dissertation, we assess the benefit of the inexact approach in terms of actual CPU time on realistic problems, and we provide cases that shed instructive light on how results are affected by the buildup of inexactness. Such information is of vital importance to practitioners who need to decide whether switching their established workflow to the inexact approach is worth the effort and the risk that might come with it. Our assessment is drawn from extensive numerical experiments on the Alabama Supercomputing Facility that gauge the effectiveness of the inexact scheme and its suitability to certain problems depending on how much inexactness is allowed in the matrix-vector products. We present many different applications throughout this dissertation and show different matrix structures and characteristics, which matter because linear system solvers sometimes do not converge to the correct solution if the matrices lack specific properties. We apply incomplete preconditioning techniques to our inexact scheme and show that we can accelerate convergence or even recover convergence that was lost with restarted GMRES.
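
A quick way to experiment with the idea of relaxed matrix-vector products is to hand SciPy's GMRES a LinearOperator whose matvec is deliberately perturbed. The diagonally dominant test matrix and the simple noise model below are invented for illustration; the dissertation's relaxation strategies, preconditioning, and timing studies are far more involved.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import LinearOperator, gmres

n = 500
A = diags([-1.0, 4.0, -1.0], [-1, 0, 1], shape=(n, n)).tocsr()   # easy test matrix
b = np.ones(n)
rng = np.random.default_rng(0)

def inexact_matvec(v, eps=1e-6):
    """Exact product A @ v plus a perturbation of relative size eps."""
    noise = rng.standard_normal(n)
    noise *= eps * np.linalg.norm(v) / np.linalg.norm(noise)
    return A @ v + noise

A_relaxed = LinearOperator((n, n), matvec=inexact_matvec)

x_exact, info_exact = gmres(A, b)              # exact matrix-vector products
x_relaxed, info_relaxed = gmres(A_relaxed, b)  # relaxed matrix-vector products
print("exit flags:", info_exact, info_relaxed)
print("relative difference between the two solutions:",
      np.linalg.norm(x_exact - x_relaxed) / np.linalg.norm(x_exact))
```
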
Item: Pseudo-transient ghost fluid methods for the Poisson-Boltzmann equation with a two-component regularization (University of Alabama Libraries, 2019). Ahmed Ullah, Sheik; Zhao, Shan; University of Alabama Tuscaloosa.
The Poisson-Boltzmann equation (PBE) is a well-established implicit solvent continuum model for the electrostatic analysis of solvated biomolecules. The numerical solution of the nonlinear PBE is still a challenge due to the strong singularities of its source terms, its dielectrically distinct regions, and its exponential nonlinear terms. In this dissertation, a new alternating direction implicit (ADI) method is proposed for solving the nonlinear PBE using a two-component regularization. This scheme inherits all the advantages of the two-component regularization and of the pseudo-time solution of the PBE, while possessing a novel approach to combining them. A modified ghost fluid method (GFM) is introduced to incorporate the nonzero jump condition into the ADI framework, yielding a new GFM-ADI method. It produces better results in terms of spatial accuracy and stability compared with the existing ADI methods for the PBE, and it is simpler to implement because it circumvents the rigorous 3D interface treatments otherwise needed with the regularization. Moreover, the stability of the GFM-ADI method is significantly improved in comparison with the non-regularized ADI method, so that stable and efficient protein simulations can be carried out with a fairly large time step size. Two locally one-dimensional (LOD) methods, which are unconditionally stable, have also been developed for the time-dependent regularized PBE. Finally, for numerical validation, we have evaluated the solvation free energy for a collection of 24 proteins of various sizes, as well as the salt effect on the protein-protein binding energy of protein complexes.

Item: Sparse regression of textual analysis (University of Alabama Libraries, 2018). Carter, Phylisicia N.; Ames, Brendan; University of Alabama Tuscaloosa.
We consider sparse regression techniques as tools for classification of sentiment within Twitter posts. Analysis of Twitter usage suffers from several unique challenges. For example, the 140-character limit severely restricts the amount of information contained in each post; this causes most tweets to contain an extremely small subset of the dictionary, presenting challenges for learning schemes based on dictionary usage. To remedy this undersampling issue, we propose the use of penalized regression. Here, we employ regularized logistic regression to avoid any degeneracy caused by the sparse usage of the dictionary in each tweet, while simultaneously learning which terms are most associated with each sentiment. Accelerated sparse discriminant analysis is also used to combat the issues of degeneracy and overfitting of the training data while providing dimension reduction. As illustrative examples, we employ sparse logistic regression to classify tweets based on the users' perception of a connection between vaccination and autism, and we examine Twitter users' sentiment regarding the use of autonomous cars.
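
The sparse (L1-penalized) logistic regression step can be sketched with scikit-learn's text tools. The four example tweets and their labels below are fabricated placeholders; the thesis uses real Twitter data on the vaccination/autism question and on autonomous cars.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Toy sketch of L1-penalized (sparse) logistic regression for tweet sentiment.
tweets = [
    "self driving cars are amazing and safe",
    "autonomous cars will save lives",
    "I do not trust a car that drives itself",
    "self driving cars are dangerous",
]
labels = [1, 1, 0, 0]          # 1 = positive sentiment, 0 = negative

vec = TfidfVectorizer()
X = vec.fit_transform(tweets)  # sparse document-term matrix

# Smaller C means a stronger penalty and therefore a sparser coefficient vector.
clf = LogisticRegression(penalty="l1", solver="liblinear", C=10.0).fit(X, labels)

# Nonzero coefficients indicate the terms the sparse model associates with sentiment.
terms = vec.get_feature_names_out()
nonzero = [(terms[i], round(c, 3)) for i, c in enumerate(clf.coef_[0]) if c != 0]
print(nonzero)
```
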
Item: Stability analysis of a bilayer contained within a cylindrical tube (University of Alabama Libraries, 2015). Song, Yuanyuan; Halpern, David; University of Alabama Tuscaloosa.
Airways in the lung are coated with a liquid bilayer consisting of a serous layer adjacent to a more viscous mucus layer which is contiguous with the air core. An instability due to surface tension at the interfaces may lead to the formation of a liquid plug that blocks the passage of air. This is known as airway closure. A stability analysis is carried out for the case when a Newtonian and immiscible liquid bilayer coats a compliant tube in the presence of an insoluble surfactant monolayer at the mucus-gas interface. A surface-active material such as surfactant lowers the surface tension and also generates a surface stress at the interface, both of which are stabilizing, while the wall compliance may accelerate the formation of the liquid bridge. A system of nonlinear coupled equations for the deflections of the interfaces and the surfactant concentration is derived using an extended lubrication theory analysis. A linear stability study using normal modes is conducted by linearizing the nonlinear evolution equations. A linear eigenvalue problem for the perturbation amplitudes is obtained, and non-trivial solutions exist provided the determinant of the linear system vanishes. A fourth-order polynomial for the growth rate of the disturbances is derived, whose coefficients depend on the wavenumber of the perturbation, the wall characteristics, the Marangoni number, the thickness of the bilayer, the aspect thickness ratio, the viscosity ratio of the two liquid layers, and the surface tension ratio. Both stabilizing and destabilizing effects of the various system parameters are investigated. A classical lubrication theory model is also derived for the cases where a bilayer coats a rigid tube with insoluble surfactant along the liquid-gas interface, and where a bilayer coats a compliant tube with a clean liquid-gas interface. Results serve as a validation of the extended lubrication theory model. The accuracy of the extended lubrication theory model as the bilayer thickness increases is tested by considering a more general approach that is valid for arbitrary bilayer thickness. A system of two Orr-Sommerfeld equations is obtained using this more general approach, which together with the boundary conditions yields an eigenvalue problem for the growth rate. Validations and comparisons with the lubrication theory models (both extended and classical) are provided. In the last part of this thesis, the nonlinear evolution equations are also solved numerically beyond the linear regime, using the method of lines, for the case of a bilayer coating a compliant tube with surfactant along the mucus-gas interface. Numerical results show that the closure time, that is, the time required for a liquid plug to form, increases with the Marangoni number. It is well known that for a single layer the closure time can increase by a factor of four or five due to surfactant, which immobilizes the gas-liquid interface. However, for a bilayer, surfactant may delay closure by a factor of twenty or more.

Item: Structural validity and reliability of two observation protocols in college mathematics (University of Alabama Libraries, 2017). Watley, Laura Erin; Gleason, Jim; University of Alabama Tuscaloosa.
Undergraduate mathematics education is being challenged to improve, with peer evaluation, student evaluations, and portfolio assessments as the primary methods of formative and summative assessment used by instructors. Observation protocols like the Mathematics Classroom Observation Protocol for Practices (MCOP^2) and the abbreviated Reformed Teaching Observation Protocol (aRTOP) are another alternative. However, before these observation protocols can be used in the classroom with confidence, a study needed to be conducted to examine both the aRTOP and the MCOP^2. This study was conducted at three large doctorate-granting universities and eight master's and baccalaureate institutions. Both the aRTOP and the MCOP^2 were evaluated in 110 classroom observations during the Spring 2016, Fall 2016, and Spring 2017 semesters. The data analysis allowed conclusions regarding the internal structure, internal reliability, and relationship between the constructs measured by both observation protocols. The factor loadings and fit indices produced by a confirmatory factor analysis (CFA) indicated a stronger internal structure for the MCOP^2. Cronbach's alpha was also calculated to analyze the internal reliability of each subscale of both protocols: all alphas were in the satisfactory range for the MCOP^2 and below the satisfactory range for the aRTOP. Linear regression analysis was conducted to estimate the relationship between the constructs of both protocols. We found a positive and strong correlation between each pair of constructs, with a higher correlation between subscales that do not contain Content Propositional Knowledge. This leads us to believe that the Content Propositional Knowledge construct of the aRTOP measures something different, though not very well, and needs to be assessed using another method. As noted above and detailed in the body of the work, we find support for the Mathematics Classroom Observation Protocol for Practices (MCOP^2) as a useful assessment tool for undergraduate mathematics classrooms.
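
For reference, Cronbach's alpha for a subscale can be computed directly from an observations-by-items score matrix as alpha = k/(k-1) * (1 - sum of item variances / variance of the total score). The score matrix below is fabricated for illustration; it is not data from the study.

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for a matrix with rows = observations, columns = items."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                          # number of items on the subscale
    item_vars = scores.var(axis=0, ddof=1)       # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical item scores for five observed lessons on a four-item subscale.
toy_scores = np.array([
    [3, 2, 3, 3],
    [1, 1, 2, 1],
    [2, 2, 2, 3],
    [3, 3, 3, 2],
    [0, 1, 1, 1],
])
print(f"Cronbach's alpha = {cronbach_alpha(toy_scores):.2f}")
```
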
Item: A super-Gaussian Poisson-Boltzmann model for electrostatic solvation energy calculation: smooth dielectric distributions for protein cavities and in both water and vacuum states (University of Alabama Libraries, 2018). Hazra, Tania; Zhao, Shan; University of Alabama Tuscaloosa.
Calculations of the electrostatic potential and solvation energy of macromolecules are essential for understanding the mechanism of many biological processes. In the classical implicit solvent Poisson-Boltzmann (PB) model, the macromolecule and water are modeled as two dielectric media with a sharp border. However, the dielectric property of interior cavities and ion channels is difficult to model in a two-dielectric setting. In fact, whether there are water molecules or cavity fluid inside a protein cavity remains an experimental challenge, and physically this uncertainty affects the subsequent solvation free energy calculation. To compensate for this uncertainty, a novel super-Gaussian dielectric PB model is introduced in this work, which devises an inhomogeneous dielectric distribution to represent the compactness of atoms and characterizes empty cavities via a gap dielectric value. Moreover, the minimal molecular surface level set function is adopted so that the dielectric profile remains smooth when the protein is transferred from the water phase to vacuum. A nice feature of this new model is that as the order of the super-Gaussian function approaches infinity, the dielectric distribution reduces to the piecewise constant of the two-dielectric model. Mathematically, a simple effective dielectric constant analysis is introduced in this work to benchmark the dielectric model and select optimal parameter values. Computationally, a pseudo-time alternating direction implicit (ADI) algorithm is utilized for solving the super-Gaussian PB equation, which is found to be unconditionally stable in a smooth dielectric setting. Solvation free energy calculations for a Kirkwood sphere and various proteins are carried out to validate the super-Gaussian model and the ADI algorithm. One macromolecule with both cavity fluids and empty cavities is employed to demonstrate how the cavity uncertainty in protein structure can be bypassed through dielectric modeling in the biomolecular electrostatic analysis.

Item: Unconditionally stable time splitting methods for the electrostatic analysis of solvated biomolecules (University of Alabama Libraries, 2015). Wilson, Leighton Wayne; Zhao, Shan; University of Alabama Tuscaloosa.
This work introduces novel unconditionally stable operator splitting methods for solving the time-dependent nonlinear Poisson-Boltzmann (NPB) equation for the electrostatic analysis of solvated biomolecules. In a pseudo-transient continuation solution of the NPB equation, a long time integration is needed to reach the steady state. This calls for time-stepping schemes that are stable and accurate for large time increments. The existing alternating direction implicit (ADI) methods for the NPB equation are known to be conditionally stable, although fully implicit. To overcome this difficulty, we propose several new operator splitting schemes, including multiplicative locally one-dimensional (LOD) schemes and additive operator splitting (AOS) schemes. The nonlinear term is integrated analytically in these schemes, while standard finite difference discretizations in space and implicit time integrations are used. The proposed schemes are much more stable than the ADI methods, and some of them are indeed unconditionally stable in dealing with solvated proteins with source singularities and non-smooth solutions. Numerically, the orders of convergence in both space and time are found to be one. Nevertheless, the precision in calculating the electrostatic free energy is low unless a small time increment is used. Further accuracy improvements are therefore considered, through a Richardson extrapolation procedure and a tailored recovery scheme that replaces the fast Fourier transform method with the operator splitting method in the vacuum case. After acceleration, the optimized LOD method can produce a reliable energy estimate by integrating for a small and fixed number of time steps. Since one only needs to solve a tridiagonal linear system in each independent one-dimensional process, the overall computation is very efficient. The unconditionally stable LOD method scales linearly with the number of atoms in the protein studies, and is over 20 times faster than the conditionally stable ADI methods. In addition, some preliminary results on increased stability for ADI methods using a regularization scheme are presented.
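
To illustrate the operator splitting structure, the sketch below takes one locally one-dimensional step for a plain 2D heat equation, where each sweep reduces to a set of tridiagonal solves. This is a generic LOD sketch with made-up grid and time step sizes; it omits the analytic integration of the nonlinear term and everything else specific to the NPB solver.

```python
import numpy as np
from scipy.linalg import solve_banded

# One LOD time step for u_t = u_xx + u_yy on a uniform grid with zero
# Dirichlet boundaries: sweep implicitly in x, then implicitly in y.
n, dt = 64, 1e-3              # interior grid points per direction, time step
h = 1.0 / (n + 1)
r = dt / h**2

def implicit_1d_sweep(U, axis):
    """Solve (I - r * d^2/ds^2) along one axis, line by line (tridiagonal)."""
    ab = np.zeros((3, n))
    ab[0, 1:] = -r            # super-diagonal
    ab[1, :] = 1 + 2 * r      # main diagonal
    ab[2, :-1] = -r           # sub-diagonal
    U = np.moveaxis(U, axis, 0)
    out = np.empty_like(U)
    for j in range(n):
        out[:, j] = solve_banded((1, 1), ab, U[:, j])
    return np.moveaxis(out, 0, axis)

# Initial condition: a smooth bump over the unit square.
x = np.linspace(h, 1 - h, n)
U = np.outer(np.sin(np.pi * x), np.sin(np.pi * x))

U = implicit_1d_sweep(U, axis=0)   # implicit sweep in x
U = implicit_1d_sweep(U, axis=1)   # implicit sweep in y
print("max of solution after one LOD step:", U.max())
```
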