Department of Computer Science
Browsing Department of Computer Science by Title
Now showing 1 - 20 of 103
Item A Web-Based Geotechnical GIS (Wiley, 2011-10-05) Graettinger, Andrew J.; Ryals, Zachary T.; Smith, Randy K.
A web-based Geotechnical Geographic Information System (GeoGIS) was developed and tested for the Alabama Department of Transportation. This web-based system stores geotechnical information about transportation projects, such as subsurface data, construction drawings, and design information. Typically, this information is in a report or plan sheet format, but raw geotechnical data can also be accommodated in the GeoGIS. The goal of the system is to provide easy access and storage for all geotechnical and subsurface structural information from across a state. Access through a secure web interface allows for keyword searches and interactive map selection. The web-based GeoGIS has four geotechnical layers (project, bridge, foundation, and soil boring) that can be displayed on a road map, aerial photographs, or USGS 7.5-minute quadrangles. For testing purposes the GeoGIS was populated with multiple document types, formats, and sizes. In all cases, the system performed above expectations.

Item Accountability in smart grid and medical sensor network (University of Alabama Libraries, 2013) Liu, Jing; Xiao, Yang; University of Alabama Tuscaloosa
Although advanced cyber security technology protects every level of current network infrastructure, vulnerabilities continue to emerge as new functions are added. As a complement, accountability is required to further secure the network in terms of privacy, integrity, and confidentiality: even when a security issue presents itself, a built-in accountability mechanism can identify who is responsible for it. This dissertation studies existing accountability techniques and applies them to several important cyber security problems. One such problem arises in smart grids. A power utility charges customers solely on the basis of readings from their power meters, and to limit operating cost, the utility measures only the aggregate power supplied to a service area. Once a meter is compromised by a cyber attack, the utility can hardly detect the compromise and may suffer economic loss. To make the smart grid more reliable, we propose accountable metering systems for both home area and neighborhood area networks. Analysis and simulation results show that abnormal meters can be identified effectively under reasonable assumptions. Another case is the medical sensor network (MSN), in which patients are equipped with medical sensors and wearable devices and are monitored remotely by professionals. Because this is an economical way to reduce healthcare costs and conserve medical resources, a robust, reliable, and scalable MSN can be expected in the near future. However, the time signals and temporal histories in current MSNs are vulnerable because of unsecured infrastructure and transmission strategies, and an MSN may leak patient identities or other sensitive information, violating personal privacy. To ensure the correctness of critical time signals, we present two temporal accountability schemes for the MSN; these schemes also preserve privacy.
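The metering idea above lends itself to a small illustration. Below is a toy sketch, not the dissertation's actual scheme, of how a neighborhood-area check might flag tampering by comparing the utility's aggregate supply measurement against the sum of reported meter readings; the meter names, readings, and loss tolerance are all invented:

```python
# Toy sketch of accountable metering: flag a service area whose reported
# meter readings do not add up to the measured aggregate supply.
# Hypothetical data and tolerance; not the dissertation's actual algorithm.

def audit_area(aggregate_kwh: float, reported_kwh: dict[str, float],
               loss_tolerance: float = 0.05) -> bool:
    """Return True if the area is consistent: the gap between measured
    aggregate supply and the sum of reported readings stays within the
    expected distribution-loss tolerance."""
    gap = aggregate_kwh - sum(reported_kwh.values())
    return gap <= loss_tolerance * aggregate_kwh

readings = {"meter_1": 31.2, "meter_2": 28.9, "meter_3": 12.4}  # kWh, made up
if not audit_area(aggregate_kwh=80.0, reported_kwh=readings):
    print("Mismatch: some meter in this area is likely under-reporting.")
```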
Item Accountable logging and its applications to intrusion detection (University of Alabama Libraries, 2013) Fu, Bo; Xiao, Yang; University of Alabama Tuscaloosa
Today's computer and network systems were not originally designed for accountability, which plays a crucial role in information assurance. To assure accountability, each entity in the system should be held responsible for its own behavior, so that the entity forms part of larger chains of accountability across the system. To achieve accountability, a flow-net methodology that records events as well as the relations between events was proposed. The multi-layer nature of computer and network systems offers the chance to achieve multiple degrees of accountability, meaning that the system's behavior can be characterized at different levels. In this dissertation, a multi-resolution flow-net is proposed for achieving multi-layer accountability. Moreover, intrusion detection systems, which monitor malicious behavior in computer and network systems, play an important role in assuring system security. The flow-net, which builds comprehensive logs and helps track events, is well suited to recording system and user behavior. In this dissertation, an Intrusion Detection Scheme by Flow-Net Based Fingerprint (IDS-FF) is proposed for detecting the fingerprints of malicious behaviors. As an application of the IDS-FF scheme, we use it to detect intrusions in TCP/IP networks. Furthermore, to detect intrusions that disguise themselves as regular network behavior, we apply the IDS-FF scheme together with cryptographic techniques in TCP/IP networks.

Item Algorithms on the GPU (University of Alabama Libraries, 2017) Robinson, Jeffrey A.; Vrbsky, Susan V.; Hong, Xiaoyan; University of Alabama Tuscaloosa
General Purpose Programming using Graphics Processing Units (GPGPU) is a fast-growing subfield of High Performance Computing (HPC). These devices provide very high throughput at low cost for many parallel problems, with performance increasing every year while costs remain stable or, in some cases, even decrease. Many modern supercomputing clusters include these devices for use by scientists and engineers. In this dissertation we analyze three algorithms on the GPGPU, from the domains of large-integer modular arithmetic, graph optimization problems, and ranking using machine learning, in order to study and propose new strategies for improving their performance. For the large-integer modular arithmetic problem, we implement a GPU-based version of the Montgomery multiplication algorithm, incorporating optimizations that yield notable performance improvements over existing GPU implementations. In the graph optimization domain, we present a Traveling Salesman Problem (TSP) two-opt approximation algorithm with a modification called k-swap; with the proposed k-swap modification, the GPU implementation obtains a speed-up of 4.5x to 22.9x over the existing algorithm on datasets ranging from 1,400 to 33,810 nodes. Lastly, for ranking using machine learning, a new learning-to-rank strategy is designed and studied that combines the two machine learning approaches of clustering and ranking. Results demonstrate an improved ranking of documents for web-based queries.
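The k-swap contribution builds on classic two-opt local search. As a point of reference only, here is a minimal CPU-side Python sketch of one two-opt improvement pass; the tour representation and names are illustrative, and the dissertation's GPU version evaluates many candidate swaps in parallel rather than scanning sequentially:

```python
import itertools

def tour_length(tour, dist):
    """Total length of a closed tour under a symmetric distance matrix."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def two_opt_pass(tour, dist):
    """One improvement pass: reverse the segment between i and j whenever
    doing so shortens the tour. k-swap, per the abstract, applies several
    swaps per pass instead of one at a time."""
    n = len(tour)
    improved = False
    for i, j in itertools.combinations(range(1, n), 2):
        a, b = tour[i - 1], tour[i]
        c, d = tour[j], tour[(j + 1) % n]
        # Gain of replacing edges (a,b) and (c,d) with (a,c) and (b,d).
        if dist[a][c] + dist[b][d] < dist[a][b] + dist[c][d]:
            tour[i:j + 1] = reversed(tour[i:j + 1])
            improved = True
    return improved

# Tiny made-up instance: repeat passes until no improvement remains.
dist = [[0, 2, 9, 10], [2, 0, 6, 4], [9, 6, 0, 3], [10, 4, 3, 0]]
tour = [0, 1, 2, 3]
while two_opt_pass(tour, dist):
    pass
print(tour, tour_length(tour, dist))  # -> [0, 1, 3, 2] 18
```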
Item Algorithms with applications in robotics (University of Alabama Libraries, 2009) Munteanu, Bogdan; Borie, Richard B.; University of Alabama Tuscaloosa
Many real-world applications that involve computational steps are closely tied to theoretical computer science. For these systems to be deployed and used efficiently, a thorough analysis is required in advance. This dissertation deals with several real-world problems from the field of robotics that can be modeled and analyzed mathematically. One of these problems, known as the pursuit-evasion problem, involves the use of independent automated robots to capture a fugitive hiding in a building or a cave system. This is an extensively studied problem in game theory and combinatorics with multiple variations. It can be modeled on a graph, and the goal is to minimize the cost of capturing the evader. We deal with two completely different variations of this problem: a vision-based variant, in which the robots have limited vision and can react when the fugitive is in line of sight; and a no-vision variant, in which the robots have no knowledge of the fugitive. Another problem we address is neighbor discovery in wireless networks using directional antennas, which has received growing interest in recent years. Our approach to solving this problem, as well as our model, differs from the results previously published in the literature. Besides modeling and formally analyzing these problems, our focus in this dissertation is to design efficient algorithms that solve them either completely or partially.

Item Application of human error theories in detecting and preventing software requirement errors (University of Alabama Libraries, 2017) Hu, Wenhua; Carver, Jeffrey C.; University of Alabama Tuscaloosa
Developing correct software requirements is important for overall software quality. Most existing quality improvement approaches focus on the detection and removal of faults (i.e., problems recorded in a document) as opposed to identifying the underlying errors that produced those faults. Accordingly, developers are likely to make the same errors in the future and to miss other existing faults with the same origins. The Requirement Error Taxonomy (RET) developed by Walia and Carver helps focus the developer's attention on common errors that can occur during requirements engineering. However, because the development of software requirements is a human-centric process, requirements engineers will likely make human errors that may lead to undetected faults. To bridge this gap, the goals of my dissertation are: (1) construct a complete Human Error Taxonomy (HET) for the software requirements stage; (2) investigate the usefulness of the HET as a defect detection technique; (3) investigate the effectiveness of the HET as a defect prevention technique; and (4) provide specific defect prevention measures for each error in the HET. To address these goals, the dissertation contains three articles. The first is a systematic literature review that uses insights from cognitive psychology research on human errors to develop a formal HET to help software engineers improve software requirements specification (SRS) documents. After building the HET, it is necessary to evaluate its effectiveness empirically; thus, the second article describes two studies that evaluate the usefulness of the HET in defect detection. Finally, the third article analyzes the usefulness of the HET for defect prevention and provides strategies for preventing specific errors in the SRS.
Item Automatic Count of Bites and Chews From Videos of Eating Episodes (IEEE, 2020) Hossain, Delwar; Ghosh, Tonmoy; Sazonov, Edward; University of Alabama Tuscaloosa
Methods for measuring eating behavior (known as meal microstructure) often rely on manual annotation of bites, chews, and swallows in meal videos or wearable sensor signals. Manual annotation can be time-consuming and error-prone, while wearable sensors may not capture every aspect of eating (e.g., chews only). The aim of this study is to develop a method to detect and count bites and chews automatically from meal videos. The method was developed on a dataset of 28 volunteers consuming unrestricted meals in the laboratory under video observation. First, the faces in the video (regions of interest, ROIs) were detected using Faster R-CNN. Second, a pre-trained AlexNet was trained on the detected faces to classify images as bite/no-bite. Third, affine optical flow was applied to consecutively detected faces to find the rotational movement of the pixels in the ROIs. The number of chews in a meal video was counted by converting the 2-D images to a 1-D optical flow parameter and finding peaks. The developed bite and chew count algorithm was applied to 84 meal videos collected from the 28 volunteers. A mean accuracy (+/- STD) of 85.4% (+/- 6.3%) with respect to manual annotation was obtained for the number of bites and 88.9% (+/- 7.4%) for the number of chews. The proposed method for automatic bite and chew counting shows promising results and can serve as an alternative to manual annotation.
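The final counting step described above, reducing the video to a 1-D optical-flow parameter and finding peaks, can be illustrated with a short sketch; the signal, sampling rate, and peak thresholds below are invented, not values from the paper:

```python
import numpy as np
from scipy.signal import find_peaks

# Hypothetical 1-D optical-flow signal sampled at 30 fps: chewing shows up
# as a roughly periodic oscillation of the jaw-region rotation parameter.
fps = 30
t = np.arange(0, 10, 1 / fps)                  # 10 s of video
flow = np.sin(2 * np.pi * 1.5 * t)             # ~1.5 chews per second
flow += 0.2 * np.random.default_rng(0).normal(size=t.size)  # sensor noise

# Count chews as prominent peaks; the minimum peak distance suppresses
# double-counted jitter (at most ~3 chews/s here). Thresholds are made up.
peaks, _ = find_peaks(flow, prominence=0.5, distance=fps // 3)
print(f"estimated chews: {len(peaks)}")        # expect about 15
```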
Item Automatic Ingestion Monitor Version 2 - A Novel Wearable Device for Automatic Food Intake Detection and Passive Capture of Food Images (IEEE, 2021) Doulah, Abul; Ghosh, Tonmoy; Hossain, Delwar; Imtiaz, Masudul H.; Sazonov, Edward; University of Alabama Tuscaloosa
The use of food image capture and/or wearable sensors for dietary assessment has grown in popularity. "Active" methods rely on the user to take an image of each eating episode. "Passive" methods use wearable cameras that continuously capture images. Most passively captured images are not related to food consumption and may present privacy concerns. In this paper, we propose a novel wearable sensor (Automatic Ingestion Monitor 2, AIM-2) designed to capture images only during automatically detected eating episodes. The capture method was validated on a dataset collected from 30 volunteers in the community, each wearing the AIM-2 for 24 h in a pseudo-free-living environment and 24 h in a free-living environment. The AIM-2 was able to detect food intake over 10-second epochs with a mean (+/- standard deviation) F1-score of 81.8 +/- 10.1%. The accuracy of eating episode detection was 82.7%. Of a total of 180,570 images captured, 8,929 (4.9%) belonged to detected eating episodes. Privacy concerns were assessed by a questionnaire on a scale of 1-7: continuous capture had a concern value of 5.0 +/- 1.6 (concerned), while image capture only during food intake had a concern value of 1.9 +/- 1.7 (not concerned). The results suggest that the AIM-2 can provide accurate detection of food intake, reduce the number of images for analysis, and alleviate users' privacy concerns.

Item Bayesian analysis of the impact of rainfall data product on simulated slope failure for North Carolina locations (Springer, 2019) Yatheendradas, Soni; Kirschbaum, Dalia; Nearing, Grey; Vrugt, Jasper A.; Baum, Rex L.; Wooten, Rick; Lu, Ning; Godt, Jonathan W.; University of Maryland College Park; National Aeronautics & Space Administration (NASA); NASA Goddard Space Flight Center; University of Alabama Tuscaloosa; University of California Irvine; United States Department of the Interior; United States Geological Survey
In the past decades, many different approaches have been developed in the literature to quantify the load-carrying capacity and geotechnical stability (or the factor of safety, F_s) of variably saturated hillslopes. Much of this work has focused on a deterministic characterization of hillslope stability. Yet simulated F_s values are subject to considerable uncertainty due to our inability to accurately characterize the soil mantle's hydraulic, geotechnical, and geomorphologic properties and the spatiotemporal variability of the moisture content of the hillslope interior. This is particularly true at larger spatial scales; thus, uncertainty-incorporating analyses of physically based models of rain-induced landslides are rare in the literature. Such landslide modeling is typically conducted at the hillslope scale using gauge-based rainfall forcing data with rather poor spatiotemporal coverage. For regional landslide modeling, the specific advantages and/or disadvantages of gauge-only, radar-merged, and satellite-based rainfall products are not clearly established. Here, we compare and evaluate the performance of the Transient Rainfall Infiltration and Grid-based Regional Slope-stability analysis (TRIGRS) model for three different rainfall products using 112 observed landslides from the North Carolina Geological Survey database for the period between 2004 and 2011. Our study includes the Tropical Rainfall Measuring Mission (TRMM) Multi-satellite Precipitation Analysis Version 7 (TMPA V7), the North American Land Data Assimilation System Phase 2 (NLDAS-2) analysis, and the reference-truth Stage IV precipitation. TRIGRS model performance was rather poor with literature values of the geotechnical parameters and soil hydraulic properties from ROSETTA using soil textural and bulk density data from SSURGO (Soil Survey Geographic database). The performance of TRIGRS improved considerably after Bayesian estimation of the parameters with the DiffeRential Evolution Adaptive Metropolis (DREAM) algorithm using Stage IV precipitation data. To this end, we use a likelihood function that combines binary slope failure information from landslide-event and null periods using multivariate frequency distribution-based metrics such as the false discovery and false omission rates. Our results demonstrate that the Stage IV-inferred TRIGRS parameter distributions generalize well to TMPA and NLDAS-2 precipitation data, particularly at sites with considerably larger TMPA and NLDAS-2 rainfall amounts during landslide events than during null periods; TRIGRS model performance is then rather similar for all three rainfall products. At higher elevations, however, the TMPA and NLDAS-2 precipitation volumes are insufficient, and their performance with the Stage IV-derived parameter distributions indicates an inability to accurately characterize hillslope stability.
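The likelihood function above scores binary slope-failure predictions with frequency-based metrics; a minimal sketch of how the false discovery and false omission rates fall out of a binary confusion table (the prediction and observation vectors are invented):

```python
# False discovery rate and false omission rate from binary predictions
# (1 = slope failure predicted/observed). Counts are illustrative only.

def fdr_for(pred, obs):
    """Return (false discovery rate, false omission rate)."""
    fp = sum(p and not o for p, o in zip(pred, obs))  # predicted, not observed
    tp = sum(p and o for p, o in zip(pred, obs))
    fn = sum(o and not p for p, o in zip(pred, obs))  # observed, missed
    tn = sum(not p and not o for p, o in zip(pred, obs))
    return fp / (fp + tp), fn / (fn + tn)

pred = [1, 1, 0, 0, 1, 0, 0, 1]
obs  = [1, 0, 0, 0, 1, 1, 0, 1]
fdr, fom = fdr_for(pred, obs)
print(f"FDR = {fdr:.2f}, FOR = {fom:.2f}")  # 0.25 and 0.25 here
```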
Item Botnet: Classification, Attacks, Detection, Tracing, and Preventive Measures (Springer, 2009) Liu, Jing; Xiao, Yang; Ghaboosi, Kaveh; Deng, Hongmei; Zhang, Jingyuan; University of Alabama Tuscaloosa; University of Oulu; Intelligent Automation Inc.
Botnets have become widespread in wired and wireless networks, whereas the relevant research is still in its initial stage. In this paper, a survey of botnets is provided. We first discuss fundamental concepts of botnets, including formation and exploitation, the lifecycle, and two major kinds of topologies. Several related attacks, detection methods, tracing approaches, and countermeasures are then introduced, followed by recent research work and possible future challenges. Copyright (C) 2009 Jing Liu et al.

Item Change Analysis Across Version Histories of Systems Models (University of Alabama Libraries, 2021) Popoola, Saheed; Gray, Jeff; University of Alabama Tuscaloosa
Model-Based Systems Engineering (MBSE) elevates models to first-class artifacts throughout the development process of a system's lifecycle. This makes it easier to develop standard tools for automated analysis and overall management of a systems process, thereby saving cost and minimizing errors. Like all systems artifacts, models are subject to continuous change, and the execution of changes may significantly affect model maintenance. Existing work has investigated processes and techniques to support, analyze, and mitigate the impact of changes to models. However, most of this work focuses on the analysis of changes between two sets of models and does not take a holistic approach to the entire version history. To support change analysis across the entire version history, we developed a Change Analyzer that can be used to query and extract change information across successive versions of a model. We used the Change Analyzer to mine several versions of Simulink models, computed the differences across the versions, and classified the computed differences into appropriate maintenance categories in order to generate information for understanding the rationale of the design decisions that necessitated the observed changes. To study the impact of changes on the models, we used the Change Analyzer to analyze the evolution of seven bad smells in 81 LabVIEW models across 10 open-source repositories, and of four bad smells in 575 Simulink models across 31 open-source repositories. The evaluation of the Change Analyzer indicates that it can be used to construct concise queries that execute faster than a generic model-based query engine. The results of the change analysis also show a high similarity between the recovered design decisions and manually identified decisions, even though the manual identification process takes much more time and often provides no additional information about the changes executed to implement the design decisions. Furthermore, we discovered that adaptive maintenance tasks often lead to an increase in the number of smells in systems models, whereas corrective maintenance tasks often correlate with a decrease in the number of smells.
Item A cloud architecture for reducing costs in local parallel and distributed virtualized cloud environments (University of Alabama Libraries, 2013) Galloway, Jeffrey Michael; Vrbsky, Susan V.; University of Alabama Tuscaloosa
Deploying local cloud architectures can benefit organizations that wish to maximize their available computational and storage resources, and many users are reluctant to move their computational and storage needs to a public cloud vendor. When designing scalable local cloud architectures, power requirements deserve close attention. This dissertation focuses on several challenging concerns relating to cloud computing architectures, specifically lowering the power requirements of Infrastructure-as-a-Service (IaaS) local cloud architectures. These challenges include power-efficient computational resource load consolidation, power-efficient persistent cloud storage consolidation, and deploying a local IaaS cloud architecture with limited networking resources. The design of a power-efficient load consolidation approach for IaaS cloud architectures is presented in this dissertation. The proposed Power Aware Load Consolidation algorithm, PALC, maintains the state of all compute nodes and, based on utilization percentages, decides the number of compute nodes that should be operating. Results show that PALC provides adequate availability of compute node resources while decreasing the overall power consumed by the local cloud architecture. Persistent storage is a necessity in cloud computing architectures. Since the goal of this local cloud architecture design is to deploy resources with minimum power consumption, a power-aware persistent storage consolidation algorithm is also presented. The Power Aware Storage Consolidation algorithm, PASC, dynamically determines the number of active persistent storage nodes based on the number of active users. This algorithm, combined with the PALC algorithm, significantly decreases the power consumed by the local cloud architecture. Because a local cloud system may be deployed in an environment with limited networking resources (IP addresses), a solution is needed that allows users to connect with only one public IP address. Users access cloud resources through a simple web interface, and maintenance of the cloud is contained within private networking resources. Also introduced is the ability to scale to multiple geographically distributed clusters in the local cloud using only one IP address per cluster. This dissertation provides a comprehensive solution for deploying a local cloud architecture that is cost-efficient to maintain.
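A toy sketch of the threshold logic behind a PALC-style consolidator follows; the thresholds, node model, and decision rule are invented for illustration, and the dissertation's algorithm is more involved:

```python
# Toy PALC-style consolidation: keep just enough compute nodes powered on
# to serve current utilization, waking or suspending nodes at thresholds.
# Threshold values and the node model are invented for illustration.

WAKE_ABOVE = 0.80    # wake a node when the active nodes are this utilized
SLEEP_BELOW = 0.30   # suspend one when utilization drops this low

def nodes_needed(active: int, total: int, utilization: float) -> int:
    """Return how many compute nodes should be powered on next."""
    if utilization > WAKE_ABOVE and active < total:
        return active + 1            # add capacity before nodes saturate
    if utilization < SLEEP_BELOW and active > 1:
        return active - 1            # consolidate load, power one node down
    return active

print(nodes_needed(active=4, total=8, utilization=0.85))  # -> 5
print(nodes_needed(active=4, total=8, utilization=0.20))  # -> 3
```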
Item Coalition formation, surveillance, and network security: modeling, design, and evaluation (University of Alabama Libraries, 2014) Liang, Xiannuan; Xiao, Yang; University of Alabama Tuscaloosa
Mobile sensors have been widely applied in sensing applications, and their performance in intrusion detection has drawn much attention from researchers. In this dissertation, we model the behavior of mobile sensors in several application scenarios and study their performance. Beyond performance, the control schemes of mobile sensors are a hot research topic, especially where cooperation between mobile sensors is concerned. Broadly, control schemes are either global, in which the mobile sensors are configured as a whole, or distributed, in which the sensors themselves have some level of autonomy over their actions. In this dissertation, two techniques, non-game-theoretic and game-theoretic, are used for the global and distributed control schemes, respectively. For the distributed control scheme, we explore the potential collaboration between mobile sensors, which we view as a coalition formation game. Given a mobile sensor's incentive to maximize its payoff, and the tradeoff between the higher detection ability gained through cooperation (in the form of forming coalitions) and the cost of forming a coalition, game theory is the tool for our study of this game. Using game theory, we show how the behaviors/strategies of the mobile sensors in the coalition formation game affect their payoffs. Computer networks have become ubiquitous, while network security issues keep changing as more sophisticated attack techniques and strategies emerge. Countermeasures must be developed and applied to guarantee desired levels of security. Among these countermeasures, cryptography is widely applied in network communications to provide confidentiality, authenticity, and integrity. Without reasonable configuration, however, encryption and decryption can degrade network transmission performance. This dissertation studies how the performance of network transmissions over a channel using sliding-window flow control is affected by the configuration of the studied cipher scheme, the Cipher Feedback (CFB) mode, and explores the optimal configuration of the scheme without reducing the security that CFB provides.
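The payoff tradeoff described above can be made concrete with a toy decision rule: a sensor joins a coalition only while the marginal detection gain exceeds the marginal formation cost. The payoff and cost functions below are invented, not the dissertation's model:

```python
# Toy coalition decision for a mobile sensor: join when the marginal
# detection payoff exceeds the cost of growing the coalition.
# Both functions are invented for illustration.

def detection_payoff(members: int) -> float:
    """Diminishing returns: each extra sensor adds less detection ability."""
    return 1.0 - 0.5 ** members

def coalition_cost(members: int) -> float:
    """Coordination cost grows with coalition size."""
    return 0.08 * members

def should_join(current_members: int) -> bool:
    gain = detection_payoff(current_members + 1) - detection_payoff(current_members)
    extra_cost = coalition_cost(current_members + 1) - coalition_cost(current_members)
    return gain > extra_cost

for m in range(1, 6):
    print(m, should_join(m))   # joining stops paying off as coalitions grow
```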
Item Code refactoring under constraints (University of Alabama Libraries, 2011) Liang, Yan; Kraft, Nicholas A.; Smith, Randy K.; University of Alabama Tuscaloosa
Code refactoring is the process of changing the internal structure of a program without changing its external behavior. Most refactoring tools ensure behavior preservation by enforcing preconditions that must hold for the refactoring to be valid. However, their approaches have three drawbacks that make the refactoring results far from satisfactory and reduce the use of refactoring tools in practice. First, programmers are not sure how code will be changed by those tools, because the refactoring rules are hidden behind the tools' interfaces. Second, current refactoring tools have limited extensibility to accommodate new refactorings. Last, most refactoring tools lack mechanisms allowing programmers to specify their own preconditions to indicate which properties of a program are of interest. We consider refactoring a code-change activity that, like other constraints imposed on code during software development and maintenance (such as naming rules), should be visible, easily extensible, and adaptable; it should also incorporate developers' opinions, the implementation style of existing code, and other good coding practices. We propose a model-based approach to precondition specification and checking, in which preconditions can be declared explicitly and dynamically against a designated program metamodel and verified against concrete program models. This dissertation applies the approach of model-based refactoring precondition specification and checking to C++ source code refactoring. Based on an analysis of primitive refactorings, we design a C++ language metamodel to support constraint specification and code inspection for refactoring purposes. We then specify the preconditions of 18 primitive refactorings against the metamodel, with primary concern for preventing syntax errors and preserving semantics. The impact of a programmer's perspective on these specifications is discussed. As another example demonstrating the importance of visible, extensible, and adaptable precondition specification and checking, we use the Template Method and Singleton patterns to discuss how design patterns can affect refactoring decisions. We set up an experimental environment in which we build the language metamodel, develop a program model extraction tool, and simulate the process of precondition specification and verification following the proposed approach.

Item Combining information retrieval modules and structural information for source code bug localization and feature location (University of Alabama Libraries, 2011) Shao, Peng; Smith, Randy K.; Kraft, Nicholas A.; University of Alabama Tuscaloosa
Bug localization and feature location in source code are software evolution tasks in which developers use information about a bug or feature present in a software system to locate the source code elements, such as classes or methods, that must be modified either to correct the bug or to implement the feature. Automating bug localization and feature location is necessary given the size and complexity of modern software systems. Recently, researchers have developed static bug localization and feature location techniques that use information retrieval methods, such as latent semantic indexing (LSI), to model lexical information from source code, such as identifiers and comments. This research presents a new technique, LSICG, which combines LSI modeling of lexical information with call graphs modeling structural information. The output is a list of methods ranked in descending order by the likelihood of requiring modification to correct the bug or implement the feature under consideration. Three case studies, comparing LSI and LSICG at the method and class levels of granularity on 25 features in JavaHMO, 35 bugs in Rhino, and 3 features and 6 bugs in jEdit, demonstrate that the LSICG technique improves performance over LSI alone.
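A minimal sketch of the kind of combination LSICG performs, blending a method's lexical-similarity score with the scores of its call-graph neighbors, is shown below; the scores, weight, and call graph are invented, and the dissertation's actual scoring differs:

```python
# Toy LSI + call-graph ranking: re-rank methods by blending each method's
# lexical similarity to the query with the scores of its call-graph
# neighbors. Scores, weight, and graph are invented for illustration.

lsi_score = {"parse": 0.82, "eval": 0.40, "render": 0.15, "cache": 0.10}
calls = {"parse": ["eval"], "eval": ["parse", "cache"],
         "render": ["cache"], "cache": []}
ALPHA = 0.7  # weight on the method's own lexical score

def combined_score(method: str) -> float:
    neighbors = calls[method]
    neighbor_avg = (sum(lsi_score[n] for n in neighbors) / len(neighbors)
                    if neighbors else 0.0)
    return ALPHA * lsi_score[method] + (1 - ALPHA) * neighbor_avg

ranking = sorted(lsi_score, key=combined_score, reverse=True)
print(ranking)  # methods most likely to need modification come first
```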
Item Communication in disruption tolerant networks: models, analyses and routing (University of Alabama Libraries, 2011) Gu, Bo; Hong, Xiaoyan; University of Alabama Tuscaloosa
Many scenarios for mobile and wireless networks exhibit disruptions in communication, where connections may be unavailable from time to time; examples include wireless sensor networks, tactical mobile ad hoc networks, planetary networks, and vehicular networks. The intermittent connectivity can result from the mobility of wireless nodes, limited transmission range, communication jamming, or low node density. To deal with these problems, Disruption Tolerant Networking (DTN) has been proposed to handle disconnection using a store-carry-forward paradigm. Among the approaches for reducing communication latency in DTNs, introducing relay nodes called throw-boxes has proved effective; however, few studies have provided sufficient analysis and routing solutions for the throw-box-based network paradigm. This dissertation addresses several challenging issues relating to wireless networks, and specifically DTNs. First, we study connectivity by focusing on the transition of a wireless network from a partitioned state to a connected state as node density grows. A percolation-theory-based model is proposed to derive the lower and upper bounds of the critical density and, further, to find the critical points in time that mark the network's transformation from the partitioned to the connected state. The second work analyzes the latency of message dissemination in throw-box-assisted DTNs. In this network architecture, static wireless devices called throw-boxes are deployed to increase message delivery probability and to reduce transmission latency. This work models the message delivery process among throw-boxes and the latency distribution for message collection. Finally, we propose efficient routing strategies for throw-box-assisted DTNs. In such a network, the mobile nodes traveling between throw-boxes form time-dependent links that carry temporarily stored messages from one box to another. Our protocol is designed to jointly consider the capacity of mobile nodes and the time-dependent delay. A Markov model is proposed to describe the evolution of the real-time links and to help derive the forwarding decision and routing policy. Our trace-based simulation validates the advantages of the proposed routing strategy.
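The percolation-style connectivity transition studied in the first part can be illustrated with a short Monte Carlo sketch: random nodes in a unit square are linked when within a fixed radio range, and the fraction of fully connected deployments jumps sharply with density. All parameters here are invented, and this is not the dissertation's analytical model:

```python
import numpy as np
from scipy.sparse.csgraph import connected_components
from scipy.spatial import distance_matrix

def connected_fraction(n_nodes: int, radio_range: float, trials: int = 50) -> float:
    """Fraction of random deployments in the unit square that are fully connected."""
    rng = np.random.default_rng(1)
    hits = 0
    for _ in range(trials):
        pts = rng.random((n_nodes, 2))
        adj = distance_matrix(pts, pts) <= radio_range  # link within range
        n_comp, _ = connected_components(adj, directed=False)
        hits += (n_comp == 1)
    return hits / trials

# Connectivity rises sharply as density crosses a critical region.
for n in (20, 40, 80, 160):
    print(n, connected_fraction(n, radio_range=0.2))
```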
Item Contemporary peer code review practices and associated benefits (University of Alabama Libraries, 2015) Bosu, Amiangshu Shekhar; Carver, Jeffrey C.; University of Alabama Tuscaloosa
Prior research indicates that peer code review is an effective method for reducing the number of defects and improving the quality of code. Besides maintaining the integrity of the code, code review spreads knowledge, expertise, and development techniques among the review participants. In recent years, many Open Source Software (OSS) communities and commercial organizations have adopted 'contemporary' or 'modern' code review: an informal, regular, tool-based process. Because both OSS and commercial developers spend significant effort performing code reviews, the primary goal of this dissertation is to better understand contemporary code review, its non-technical benefits, the factors that influence its outcomes, and its impact on peer impression formation. To address this goal, this dissertation describes empirical studies using surveys, software repository mining, and social network analysis. The first study is a survey of OSS developers to understand their collaboration and the process by which they form impressions of each other. The results suggest that coding-related factors influence impression formation the most among OSS developers; therefore, the types of interactions in which participants can judge a peer's code or creativity (e.g., code review) should be crucial for peer impression formation. The results of this study motivated the selection of peer code review as the focus of this dissertation. The second study describes a survey of developers from 36 popular OSS projects and from Microsoft about: (1) the code review process in their projects, (2) their expectations from code review, and (3) how code review shapes impressions of their peers. The results suggest that the primary perceived benefits of code review are knowledge sharing, relationship building, better designs, and more maintainable code, as opposed to the expected benefit of defect detection. Code reviews help build impressions between code review participants, and those impressions affect not only the code review process but also future collaborations among developers. Because face-to-face interactions are rare, OSS developers rely more on the reputation of, and their relationship with, the author during code reviews; conversely, Microsoft developers focus more on expertise and anticipated effort. Finally, the third study examines the impact of a developer's reputation on the outcome of his or her code review requests. The results suggest that reputation helps developers receive quicker feedback on their review requests, complete reviews in shorter time, and get their code changes accepted. Newcomers to OSS projects suffer most from delayed feedback, which may discourage their future participation; a reviewer recommendation system that triages incoming code review requests could reduce delayed feedback for newcomers. Based on the results of these studies, this dissertation makes recommendations for practitioners to adopt and improve code review practices.

Item Continuous drug infusion for diabetes therapy: A closed-loop control system design (Springer, 2007) Chen, Jiming; Cao, Kejie; Sun, Youxian; Xiao, Yang; Su, Xu (Kevin); Zhejiang University; University of Alabama Tuscaloosa; University of Texas System; University of Texas at San Antonio (UTSA)
While a typical approach to diabetes therapy is discrete insulin infusion based on long-interval measurements, in this paper we design a closed-loop control system for continuous drug infusion to improve on traditional discrete methods and make diabetes therapy automatic in practice. By exploring the accumulative effect of the drug on insulin, a continuous injection model is proposed. Based on this model, proportional-integral-derivative (PID) and fuzzy logic controllers are designed to tackle the control problem posed by the resulting highly nonlinear plant. Even with serious glucose disturbances, such as nutrition absorption at meal times, the proposed scheme performs well in simulation experiments. Copyright (c) 2008 Jiming Chen et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
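A minimal discrete-time PID loop of the kind such a controller builds on is sketched below; the gains, setpoint, and one-line plant stand-in are invented, and the paper's nonlinear plant model is far more involved:

```python
# Minimal discrete PID controller driving a toy glucose plant.
# Gains, setpoint, and the plant stand-in are illustrative only.

KP, KI, KD = 0.6, 0.05, 0.1   # made-up controller gains
SETPOINT = 100.0              # target glucose, mg/dL
DT = 1.0                      # control period, minutes

def make_pid():
    integral, prev_error = 0.0, 0.0
    def step(measured: float) -> float:
        nonlocal integral, prev_error
        error = measured - SETPOINT          # positive when glucose is high
        integral += error * DT
        derivative = (error - prev_error) / DT
        prev_error = error
        # Infusion rate cannot be negative: insulin only lowers glucose.
        return max(0.0, KP * error + KI * integral + KD * derivative)
    return step

pid = make_pid()
glucose = 180.0                              # post-meal disturbance
for _ in range(30):
    infusion = pid(glucose)
    # Toy plant: endogenous production raises glucose, insulin lowers it.
    glucose += 1.0 - 0.8 * infusion
print(round(glucose, 1))                     # approaches the setpoint
```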
Item Cooperation paradigms for overcoming communication limitations in multirobot wide area coverage (University of Alabama Libraries, 2011) Wellman, Briana Lowe; Anderson, Monica D.; University of Alabama Tuscaloosa
Multi-robot systems are an important research topic for wide-area coverage applications such as hazardous waste cleanup, bomb detection, surveillance, and search-and-rescue missions. They can work in parallel and complete tasks faster than a single robot. Communication supports cooperation, speeding up execution, reducing duplication, and preventing interference. Communication among team members is achieved explicitly or implicitly. In explicit communication, messages are intentionally transmitted from robot to robot; in implicit communication, robots observe the environment and the actions of other robots. Although many systems use explicit communication, in the exploration of large, open areas (e.g., stadiums and parks), persistent intra-team digital communication is not guaranteed. Therefore, alternative approaches that do not rely upon message passing throughout exploration are needed. Novel contributions toward overcoming communication limitations in wide-area coverage include: (1) the insight that information shared between robots that are close together has more influence on immediate action selection than information shared between robots that are farther apart, so spatial and temporal locality can be instrumental in determining relevance in subsequent action selection; (2) an approach in which observation leverages spatial and temporal locality to infer state rather than relying on digital messaging; and (3) an approach in which robots use spatial rendezvous to exchange information instead of continuously passing messages: robots explore an environment in sectors, or designated areas, and periodically meet to exchange map information about what they have explored. Simulations and physical experiments suggest that both approaches can serve as alternatives to cooperation based on continuous point-to-point communication.

Item Creation of crash-countermeasure police patrol routes targeting hotspot road segments (University of Alabama Libraries, 2010) Steil, Dana Andrew; Parrish, Allen Scott; University of Alabama Tuscaloosa
This dissertation addresses the problem of expressing, executing, evaluating, and engaging patrol-routing algorithms that target event hotspots on roadways. An "event hotspot" is a location that is over-represented in some event occurrence, such as crashes, citations, or any other event of interest. Recommended patrol routes can be used by organizations such as police agencies, emergency medical responders, and taxi services that patrol roadway segments at the proper times to assist with or deter their target events. Patrol-routing algorithms are used to specify the movements of simulated mobile agents on a roadway system. The patrol algorithms are first expressed using TURN (Technique for Ultimate Route Navigation), our extensible domain-specific language (DSL) created for this purpose. Algorithms specified in TURN syntax are then executed and evaluated in a custom simulation environment. Patrol-routing algorithms deemed fit for a specific context are then engaged by users via a web-based geographic information system (GIS) portal. In this dissertation, details of the patrol-routing model are followed by two case studies. The first case study evaluates agent response times to events when agents are dispatched from region-based staging points. The second case study evaluates several nondeterministic highway patrol-routing algorithms according to four metrics: response time, network coverage, hotspot coverage, and hotspot exposure. The case study results demonstrate the applicability of the patrol-routing system.
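One of the four evaluation metrics named above, hotspot coverage, is simple to illustrate; in the sketch below the road segments, patrol route, and hotspot set are invented, and this is not the dissertation's simulation code:

```python
# Toy evaluation of hotspot coverage: the fraction of hotspot road
# segments a patrol route visits. Segment IDs below are made up.

hotspot_segments = {"I65-mm12", "US82-mm4", "AL69-mm7", "I20-mm31"}
patrol_route = ["I65-mm10", "I65-mm11", "I65-mm12", "US82-mm4", "US82-mm5"]

def hotspot_coverage(route: list[str], hotspots: set[str]) -> float:
    """Fraction of hotspot segments that appear at least once on the route."""
    return len(hotspots & set(route)) / len(hotspots)

print(f"hotspot coverage: {hotspot_coverage(patrol_route, hotspot_segments):.0%}")
# -> 50%: two of the four hotspot segments were patrolled
```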