Theses and Dissertations - Department of Computer Science
Browsing Theses and Dissertations - Department of Computer Science by Title
Now showing 1 - 20 of 73
Item
Accountability in smart grid and medical sensor network (University of Alabama Libraries, 2013)
Liu, Jing; Xiao, Yang; University of Alabama Tuscaloosa
Although advanced cyber security technology protects every level of the current network infrastructure, vulnerabilities continue to emerge as new functions are added. As a complement, accountability is required to further secure the network in terms of privacy, integrity, and confidentiality. Even when a security issue presents itself, a built-in accountability mechanism can determine who is responsible for it. This dissertation studies existing accountability technologies and addresses several important cyber security issues using them. One specific problem arises in smart grids. A power utility charges customers solely based on readings from their power meters, and to limit operating cost, the utility measures only the aggregated power supplied to a service area. Once a meter is compromised by a cyber attack, the utility can hardly detect it and may therefore suffer economic loss. To make the smart grid more reliable, we propose accountable metering systems in both home area and neighborhood area networks. Analysis and simulation results show that abnormal meters can be effectively identified under reasonable assumptions. Another case is the medical sensor network (MSN), in which patients are equipped with medical sensors and wearable devices and are remotely monitored by professionals. Since this is an economical way to reduce healthcare costs and conserve medical resources, we expect robust, reliable, and scalable MSNs in the near future. However, the time signal and temporal history in current MSNs are vulnerable due to unsecured infrastructure and transmission strategies. Meanwhile, an MSN may leak patients' identities or other sensitive information, violating personal privacy. To ensure the correctness of critical time signals, we present two temporal accountability schemes for the MSN; these schemes also preserve privacy.
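The abstract does not reproduce the metering algorithms themselves. As a rough illustration of the kind of consistency check an accountable metering system could perform (comparing the utility's aggregate measurement for a service area against the sum of reported meter readings, then ranking suspects), here is a hedged Python sketch; the 5% tolerance and the correlation heuristic are illustrative assumptions, not the dissertation's method:

```python
# Illustrative sketch (not the dissertation's algorithm): flag billing
# intervals where the feeder-level measurement disagrees with the sum of
# individual meter reports, then rank meters by how strongly their
# reported series co-varies with the unexplained load.
import numpy as np

def find_suspect_meters(aggregate, reports, tolerance=0.05):
    """aggregate: shape (T,) feeder readings; reports: shape (N, T)."""
    discrepancy = aggregate - reports.sum(axis=0)          # unexplained load
    flagged = np.abs(discrepancy) > tolerance * aggregate  # per-interval flags
    if not flagged.any():
        return []
    # Heuristic assumption: a tampered meter's reported series tends to
    # co-vary with the discrepancy it creates; rank meters by |correlation|.
    scores = [abs(np.corrcoef(r, discrepancy)[0, 1]) for r in reports]
    return sorted(range(len(reports)), key=lambda i: -scores[i])

rng = np.random.default_rng(0)
true_load = rng.uniform(1.0, 3.0, size=(5, 24))
reports = true_load.copy()
reports[2] *= 0.5                      # meter 2 under-reports by half
aggregate = true_load.sum(axis=0)      # utility measures the true total
print(find_suspect_meters(aggregate, reports)[:1])  # meter 2 ranks first
```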
Item
Accountable logging and its applications to intrusion detection (University of Alabama Libraries, 2013)
Fu, Bo; Xiao, Yang; University of Alabama Tuscaloosa
Today's computer and network systems were not originally designed for accountability, which plays a crucial role in information assurance systems. To assure accountability, each entity in the system should be held responsible for its own behaviors, so that each entity forms part of larger chains of the system's accountability. To achieve accountability, a flow-net methodology that records events as well as the relations between events was proposed. The multi-layer nature of computer and network systems makes it possible to achieve multiple degrees of accountability, meaning we can account for the system's behaviors at different levels. In this dissertation, a multi-resolution flow-net is proposed for achieving multi-layer accountability. Moreover, Intrusion Detection Systems, which monitor malicious behaviors in computer and network systems, play an important role in assuring system security. A flow-net builds comprehensive logs that record system and user behaviors and help track events. In this dissertation, an Intrusion Detection Scheme using Flow-net Based Fingerprints (IDS-FF) is proposed for detecting the fingerprints of malicious behaviors. As an application of the IDS-FF scheme, we use it to detect intrusions in TCP/IP networks. Furthermore, in order to detect intrusions that disguise themselves as regular network behaviors, we apply the IDS-FF scheme together with cryptographic techniques in TCP/IP networks.
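A minimal toy sketch of a flow-net-style log, assuming a simple representation: events are nodes, an edge records that one event led to another, and a "fingerprint" is modeled here as an ordered chain of event types that the matcher looks for as a path. The IDS-FF scheme itself is defined in the dissertation; everything below is illustrative.

```python
# Toy flow-net log: events plus causal relations between them.
from collections import defaultdict

class FlowNet:
    def __init__(self):
        self.kind = {}                      # event id -> event type
        self.next = defaultdict(set)        # event id -> successor ids

    def log(self, eid, kind, caused_by=None):
        self.kind[eid] = kind
        if caused_by is not None:
            self.next[caused_by].add(eid)

    def matches(self, fingerprint):
        """True if some path in the log realizes the chain of event types."""
        frontier = [e for e, k in self.kind.items() if k == fingerprint[0]]
        for want in fingerprint[1:]:
            frontier = [s for e in frontier
                        for s in self.next[e] if self.kind[s] == want]
            if not frontier:
                return False
        return True

net = FlowNet()
net.log(1, "login_fail"); net.log(2, "login_fail", caused_by=1)
net.log(3, "login_ok", caused_by=2); net.log(4, "priv_esc", caused_by=3)
print(net.matches(["login_fail", "login_ok", "priv_esc"]))  # True
```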
Item
Algorithms on the GPU (University of Alabama Libraries, 2017)
Robinson, Jeffrey A.; Vrbsky, Susan V.; Hong, Xiaoyan; University of Alabama Tuscaloosa
General Purpose Programming using Graphics Processing Units (GPGPU) is a fast-growing subfield of High Performance Computing (HPC). These devices provide very high throughput at low cost for many parallel problems, with performance increasing every year while costs remain stable or in some cases even decrease. Many modern supercomputing clusters include these devices for use by scientists and engineers. In this dissertation we analyze three different algorithms on the GPGPU, from the domains of large-integer modular arithmetic, graph optimization problems, and ranking using machine learning, in order to study and propose new strategies to improve their performance. For large-integer modular arithmetic we implement a GPU-based version of the Montgomery multiplication algorithm, incorporating optimizations that yield notable performance improvements over existing GPU implementations. In the graph optimization domain we present a Traveling Salesman Problem (TSP) two-opt approximation algorithm with a modification called k-swap; with this modification, the GPU implementation obtains a speed-up over the existing algorithm of 4.5x to 22.9x on datasets ranging from 1,400 to 33,810 nodes. Lastly, for ranking using machine learning, a new learning-to-rank strategy is designed and studied that combines the two machine learning approaches of clustering and ranking. Results demonstrate an improved ranking of documents for web-based queries.
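For context, a plain CPU sketch of the classic two-opt move that the dissertation's GPU k-swap modification builds on: repeatedly reverse the tour segment between two cut points whenever doing so shortens the tour. The k-swap strategy and its GPU mapping are the dissertation's contribution and are not reproduced here.

```python
# Classic two-opt local search for TSP (CPU sketch).
import math, random

def tour_length(tour, pts):
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def two_opt(tour, pts):
    improved = True
    while improved:
        improved = False
        for i in range(1, len(tour) - 2):
            for j in range(i + 1, len(tour) - 1):
                # Gain from replacing edges (i-1,i) and (j,j+1)
                # with (i-1,j) and (i,j+1), i.e. reversing tour[i..j].
                a, b = tour[i - 1], tour[i]
                c, d = tour[j], tour[j + 1]
                delta = (math.dist(pts[a], pts[c]) + math.dist(pts[b], pts[d])
                         - math.dist(pts[a], pts[b]) - math.dist(pts[c], pts[d]))
                if delta < -1e-9:
                    tour[i:j + 1] = reversed(tour[i:j + 1])
                    improved = True
    return tour

random.seed(1)
pts = [(random.random(), random.random()) for _ in range(60)]
tour = list(range(60))
before = tour_length(tour, pts)
after = tour_length(two_opt(tour, pts), pts)
print(f"{before:.2f} -> {after:.2f}")  # strictly shorter tour
```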
Item
Algorithms with applications in robotics (University of Alabama Libraries, 2009)
Munteanu, Bogdan; Borie, Richard B.; University of Alabama Tuscaloosa
Many real-world applications that involve computational steps are closely tied to theoretical computer science. For these systems to be efficiently deployed and used, a thorough analysis is required in advance. This dissertation deals with several real-world problems in robotics that can be mathematically modeled and analyzed. One of them is the pursuit-evasion problem, which involves using independent automated robots to capture a fugitive hiding in a building or a cave system. This extensively studied game theory and combinatorics problem has multiple variations; it can be modeled on a graph, where the goal is to minimize the cost of capturing the evader. We deal with two completely different variants of this problem: a vision-based variant, in which the robots have limited vision and can react only when the fugitive is in line of sight, and a no-vision variant, in which the robots have no knowledge about the fugitive. Another problem we address is neighbor discovery in wireless networks using directional antennas, which has also received growing interest in recent years. Our approach to solving this problem, as well as our model, differs from the results previously published in the literature. Besides modeling and formally analyzing these problems, our focus in this dissertation is to design efficient algorithms that solve them either completely or partially.

Item
Application of human error theories in detecting and preventing software requirement errors (University of Alabama Libraries, 2017)
Hu, Wenhua; Carver, Jeffrey C.; University of Alabama Tuscaloosa
Developing correct software requirements is important for overall software quality. Most existing quality improvement approaches focus on the detection and removal of faults (i.e., problems recorded in a document) rather than identifying the underlying errors that produced those faults. Accordingly, developers are likely to repeat the same errors in the future and to miss other existing faults with the same origins. The Requirement Error Taxonomy (RET) developed by Walia and Carver helps focus the developer's attention on common errors that can occur during requirements engineering. However, because developing software requirements is a human-centric process, requirements engineers will likely make human errors that may lead to undetected faults. To bridge this gap, the goals of my dissertation are to: (1) construct a complete Human Error Taxonomy (HET) for the software requirements stage; (2) investigate the usefulness of the HET as a defect detection technique; (3) investigate the effectiveness of the HET as a defect prevention technique; and (4) provide specific defect prevention measures for each error in the HET. To address these goals, the dissertation contains three articles. The first is a systematic literature review that uses insights from cognitive psychology research on human errors to develop a formal HET that helps software engineers improve software requirements specification (SRS) documents. After building the HET, it is necessary to empirically evaluate its effectiveness; thus, the second article describes two studies evaluating the usefulness of the HET in defect detection. Finally, the third article analyzes the usefulness of the HET for defect prevention and provides strategies for preventing specific errors in the SRS.

Item
Change Analysis Across Version Histories of Systems Models (University of Alabama Libraries, 2021)
Popoola, Saheed; Gray, Jeff; University of Alabama Tuscaloosa
Model-Based Systems Engineering (MBSE) elevates models to first-class artifacts throughout the development process of a system's lifecycle. This makes it easier to develop standard tools for automated analysis and overall management of a systems process, thereby saving cost and minimizing errors. Like all systems artifacts, models are subject to continuous change, and the execution of changes may significantly affect model maintenance. Existing work has investigated processes and techniques to support, analyze, and mitigate the impact of changes to models; however, most of these works focus on the analysis of changes between two sets of models and do not take a holistic approach to the entire version history. To support change analysis across the entire version history, we developed a Change Analyzer that can be used to query and extract change information across successive versions of a model. We then used the Change Analyzer to mine several versions of Simulink models, computed the differences across the versions, and classified the computed differences into appropriate maintenance categories in order to recover the rationale of the design decisions that necessitated the observed changes. To study the impact of changes on the models, we used the Change Analyzer to analyze the evolution of seven bad smells in 81 LabVIEW models across 10 open-source repositories, and four bad smells in 575 Simulink models across 31 open-source repositories. The evaluation indicates that the Change Analyzer supports concise queries that execute faster than a generic model-based query engine. The results of the change analysis also show high similarity between the recovered design decisions and manually identified ones, even though manual identification takes much more time and often provides no additional information about the changes executed to implement the design decisions. Furthermore, we discovered that adaptive maintenance tasks often lead to an increase in the number of smells in systems models, whereas corrective maintenance tasks often correlate with a decrease.
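A hedged sketch of the core idea behind such a change analyzer: diff successive versions of a model (represented here, hypothetically, as a mapping from element name to its properties) and accumulate per-element change records that can then be queried across the whole history. The real Change Analyzer operates on Simulink and LabVIEW models, not on dictionaries.

```python
# Toy version-history differ: classify each change as add/remove/modify.
def diff(old, new):
    changes = []
    for name in new.keys() - old.keys():
        changes.append(("add", name))
    for name in old.keys() - new.keys():
        changes.append(("remove", name))
    for name in old.keys() & new.keys():
        if old[name] != new[name]:
            changes.append(("modify", name))
    return changes

def history(versions):
    """Change records across every pair of successive versions."""
    return [(i, c) for i in range(1, len(versions))
            for c in diff(versions[i - 1], versions[i])]

versions = [
    {"Gain": {"k": 1}},
    {"Gain": {"k": 2}, "Sum": {"inputs": 2}},   # modify + add
    {"Sum": {"inputs": 2}},                     # remove
]
for version, change in history(versions):
    print(version, change)
```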
Item
A cloud architecture for reducing costs in local parallel and distributed virtualized cloud environments (University of Alabama Libraries, 2013)
Galloway, Jeffrey Michael; Vrbsky, Susan V.; University of Alabama Tuscaloosa
Deploying local cloud architectures can be beneficial to organizations that wish to maximize their available computational and storage resources, and many users are reluctant to move their computational and storage needs to a public cloud vendor. When designing scalable local cloud architectures, power requirements should be given careful attention. This dissertation focuses on several challenging concerns of cloud computing architectures, specifically lowering the power requirements of Infrastructure-as-a-Service (IaaS) local cloud architectures. These challenges include power-efficient consolidation of computational resources, power-efficient consolidation of persistent cloud storage, and deployment of a local IaaS cloud architecture with limited networking resources. First, this dissertation presents a power-efficient load consolidation approach for IaaS cloud architectures. The proposed Power Aware Load Consolidation algorithm, PALC, maintains the state of all compute nodes and, based on utilization percentages, decides the number of compute nodes that should be operating. Results show that PALC provides adequate availability of compute node resources while decreasing the overall power consumed by the local cloud architecture. Persistent storage is a necessity in cloud computing architectures. Since the goal of this local cloud architecture is to deploy resources with minimum power consumption, a power-aware persistent storage consolidation algorithm is also presented. The Power Aware Storage Consolidation algorithm, PASC, dynamically determines the number of active persistent storage nodes based on the number of active users. This algorithm, combined with PALC, significantly decreases the power consumed by the local cloud architecture. Finally, recognizing the implications of deploying a local cloud in an environment with limited networking resources (IP addresses), a solution is needed that allows users to connect with only one public IP address. Users access cloud resources through a simple web interface, and maintenance of the cloud is contained within private networking resources. Also introduced is the ability to scale to multiple geographically distributed clusters in the local cloud using only one IP address per cluster. This dissertation provides a comprehensive solution for deploying a local cloud architecture that is cost efficient to maintain.
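The abstract describes PALC as maintaining compute-node state and deciding, from utilization percentages, how many nodes should be powered. A simplified sketch of that idea follows; the 80%/20% scale-up/scale-down thresholds and the function names are illustrative assumptions, not PALC's actual parameters.

```python
# Simplified power-aware consolidation: keep only as many compute nodes
# powered on as current utilization justifies.
def nodes_to_power(active_nodes, utilization, total_nodes,
                   scale_up=0.80, scale_down=0.20):
    """utilization: average fraction of capacity in use on active nodes."""
    if utilization > scale_up and active_nodes < total_nodes:
        return active_nodes + 1          # nearing capacity: wake a node
    if utilization < scale_down and active_nodes > 1:
        return active_nodes - 1          # consolidate load, save power
    return active_nodes

active = 4
for util in (0.85, 0.90, 0.40, 0.15, 0.10):
    active = nodes_to_power(active, util, total_nodes=8)
    print(f"utilization={util:.2f} -> {active} node(s) powered")
```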
Item
Coalition formation, surveillance, and network security: modeling, design, and evaluation (University of Alabama Libraries, 2014)
Liang, Xiannuan; Xiao, Yang; University of Alabama Tuscaloosa
Mobile sensors have been widely applied in sensing applications, and their performance in intrusion detection has drawn much attention from researchers. In this dissertation, we model the behaviors of mobile sensors in several application scenarios and study their performance. Beyond performance, the control schemes of mobile sensors are a hot research topic, especially when cooperation between mobile sensors is concerned. Control schemes can be global, in which the mobile sensors are configured as a whole, or distributed, in which the moving sensors themselves have some level of autonomy over their actions. In this dissertation, non-game-theoretic and game-theoretic techniques are used for the global and distributed control schemes, respectively. For the distributed control scheme, we explore the potential collaboration between mobile sensors, which we view as a coalition formation game. Given each mobile sensor's incentive to maximize its payoff, and the tradeoff between the higher detection ability gained by cooperating (in the form of forming coalitions) and the cost of forming them, the game-theoretic technique is the tool for our study of the coalition formation game. Using game theory, we show how the behaviors and strategies of the mobile sensors in the coalition formation game affect their payoffs. Computer networks have become ubiquitous, while network security issues keep changing as more sophisticated attack techniques and strategies emerge; countermeasures must be developed and applied to guarantee desired levels of security. Among those countermeasures, cryptography is widely applied in network communications to provide confidentiality, authenticity, and integrity, but without reasonable configuration, encryption and decryption can degrade network transmission performance. This dissertation studies how the performance of network transmission in a channel using sliding-window flow control is affected by the configuration of the studied cipher scheme, the Cipher Feedback mode, and explores the optimal configuration of the scheme without reducing the security that the Cipher Feedback mode provides.

Item
Code refactoring under constraints (University of Alabama Libraries, 2011)
Liang, Yan; Kraft, Nicholas A.; Smith, Randy K.; University of Alabama Tuscaloosa
Code refactoring is the process of changing the internal structure of a program without changing its external behavior. Most refactoring tools ensure behavior preservation by enforcing preconditions that must hold for a refactoring to be valid. However, these approaches have three drawbacks that make refactoring results less than satisfactory and reduce the use of refactoring tools in practice. First, programmers cannot be sure how code will be changed by those tools, because the refactoring rules are hidden behind the tools' interfaces. Second, current refactoring tools have limited extensibility to accommodate new refactorings. Last, most refactoring tools lack mechanisms that allow programmers to specify their own preconditions, indicating which properties of a program are of interest. We consider refactoring a code-change activity that, like other constraints imposed on code during software development and maintenance (such as naming rules), should be visible, easily extensible, and adaptable; it should also incorporate developers' opinions, the implementation style of existing code, and other good coding practices. We propose a model-based approach to precondition specification and checking, in which preconditions are declared explicitly and dynamically against a designated program metamodel and verified against concrete program models. This dissertation applies the approach of model-based refactoring precondition specification and checking to C++ source code refactoring. Based on an analysis of primitive refactorings, we design a C++ language metamodel to support constraint specification and code inspection for refactoring purposes. We then specify the preconditions of 18 primitive refactorings against the metamodel, with primary concern for syntax error prevention and semantic preservation, and discuss the impact of a programmer's perspective on these specifications. As another example demonstrating the importance of visible, extensible, and adaptable precondition specification and checking, we use the template method and singleton patterns to discuss how design patterns can affect refactoring decisions. We set up an experimental environment in which we build the language metamodel, develop a program model extraction tool, and simulate the process of precondition specification and verification following the proposed approach.
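As a hedged illustration of precondition checking declared against a program model, consider a rename-method refactoring that is valid only if the new name does not collide with an existing method in the same class. The toy model and predicate below are stand-ins for the dissertation's C++ metamodel, not part of it.

```python
# Toy program model plus an explicit, checkable refactoring precondition.
class ClassModel:
    def __init__(self, name, methods):
        self.name, self.methods = name, set(methods)

def rename_precondition(model, old, new):
    """Old method must exist; new name must not collide."""
    return old in model.methods and new not in model.methods

def rename_method(model, old, new):
    if not rename_precondition(model, old, new):
        raise ValueError(f"precondition violated for {old} -> {new}")
    model.methods.remove(old)
    model.methods.add(new)

stack = ClassModel("Stack", {"push", "pop", "size"})
rename_method(stack, "size", "count")       # allowed
try:
    rename_method(stack, "pop", "push")     # collision: rejected
except ValueError as e:
    print(e)
```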
Item
Combining information retrieval modules and structural information for source code bug localization and feature location (University of Alabama Libraries, 2011)
Shao, Peng; Smith, Randy K.; Kraft, Nicholas A.; University of Alabama Tuscaloosa
Bug localization and feature location in source code are software evolution tasks in which developers use information about a bug or feature present in a software system to locate the source code elements, such as classes or methods, that must be modified either to correct the bug or to implement the feature. Automating bug localization and feature location is necessary due to the size and complexity of modern software systems. Recently, researchers have developed static bug localization and feature location techniques using information retrieval methods, such as latent semantic indexing (LSI), to model lexical information from source code, such as identifiers and comments. This research presents a new technique, LSICG, which combines LSI, which models lexical information, with call graphs, which model structural information. The output is a list of methods ranked in descending order by the likelihood of requiring modification to correct the bug or implement the feature under consideration. Three case studies, comparing LSI and LSICG at method-level and class-level granularity on 25 features in JavaHMO, 35 bugs in Rhino, and 3 features and 6 bugs in jEdit, demonstrate that the LSICG technique provides improved performance compared to LSI alone.
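A hedged sketch of one way lexical relevance can be combined with call-graph structure: each method gets a textual similarity score to the query (standing in for LSI), which is then smoothed with its call-graph neighbors' scores. The 0.7/0.3 blend and all names are illustrative; the abstract does not give LSICG's actual combination rule.

```python
# Rank methods by a blend of their own lexical score and the average
# score of the methods they call.
def combined_ranking(lexical, call_graph, alpha=0.7):
    """lexical: method -> similarity to query; call_graph: method -> callees."""
    combined = {}
    for m, score in lexical.items():
        neighbors = [n for n in call_graph.get(m, []) if n in lexical]
        structural = (sum(lexical[n] for n in neighbors) / len(neighbors)
                      if neighbors else 0.0)
        combined[m] = alpha * score + (1 - alpha) * structural
    return sorted(combined, key=combined.get, reverse=True)

lexical = {"parse": 0.10, "renderPage": 0.80, "drawHeader": 0.30}
call_graph = {"renderPage": ["drawHeader"], "parse": ["renderPage"]}
print(combined_ranking(lexical, call_graph))
# parse is lifted above drawHeader because it calls the highly ranked
# renderPage: ['renderPage', 'parse', 'drawHeader']
```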
Item
Communication in disruption tolerant networks: models, analyses and routing (University of Alabama Libraries, 2011)
Gu, Bo; Hong, Xiaoyan; University of Alabama Tuscaloosa
Many scenarios for mobile and wireless networks exhibit disruptions in communications, where connections may not be available from time to time; examples include wireless sensor networks, tactical mobile ad hoc networks, planetary networks, and vehicular networks. The intermittent connectivity can result from the mobility of wireless nodes, limited transmission range, communication jamming, or low nodal density. To deal with these problems, Disruption Tolerant Networking (DTN) has been proposed to handle disconnection based on a store-carry-forward paradigm. Among the approaches for reducing communication latency in DTNs, introducing relay nodes called throw-boxes has proved effective, yet few studies have provided sufficient analysis and routing solutions for the throw-box-based network paradigm. This dissertation addresses several challenging issues in wireless networks, and specifically in DTNs. First, we study connectivity by focusing on the transition of a wireless network from a partitioned state to a connected state as node density grows. A percolation-theory-based model is proposed to derive the lower and upper bounds of the critical density and to find the critical time points that mark the network's transformation from a partitioned to a connected state. The second work analyzes the latency of message dissemination in throw-box-assisted DTNs. In this network architecture, static wireless devices called throw-boxes are deployed to increase message delivery probability and reduce transmission latency; this work models the message delivery process among throw-boxes and the latency distribution for message collection. Finally, we propose efficient routing strategies for throw-box-assisted DTNs. In such a network, the mobile nodes traveling between the throw-boxes form time-dependent network links that carry temporarily stored messages from one box to another. Our protocol jointly considers the capacity of mobile nodes and the time-dependent delay; a Markov model is proposed to describe the evolution of the real-time link and to help derive the forwarding decision and routing policy. Our trace-based simulation validates the advantages of the proposed routing strategy.

Item
Contemporary peer code review practices and associated benefits (University of Alabama Libraries, 2015)
Bosu, Amiangshu Shekhar; Carver, Jeffrey C.; University of Alabama Tuscaloosa
Prior research indicates that peer code review is an effective method for reducing the number of defects and improving code quality. Besides maintaining the integrity of the code, code review spreads knowledge, expertise, and development techniques among the review participants. In recent years, many Open Source Software (OSS) communities and commercial organizations have adopted 'contemporary' or 'modern' code review, an informal, regular, tool-based process. Because both OSS and commercial developers spend a significant amount of effort performing code reviews, the primary goal of this dissertation is to better understand contemporary code review, its non-technical benefits, the factors that influence its outcomes, and its impact on peer impression formation. To address this goal, this dissertation describes empirical studies using surveys, software repository mining, and social network analysis. The first study is a survey of OSS developers to understand their collaboration and the process by which they form impressions of each other. The results suggest that coding-related factors influence impression formation the most among OSS developers; therefore, the types of interactions where participants can judge a peer's code or creativity (e.g., code review) should be crucial for peer impression formation. The results of this study motivated the selection of peer code review as the focus of this dissertation. The second study describes a survey of developers from 36 popular OSS projects and from Microsoft about: 1) the code review process in their projects, 2) their expectations of code review, and 3) how code review shapes impressions of their peers. The results suggest that the primary perceived benefits of code review are knowledge sharing, relationship building, better designs, and ensuring maintainable code, as opposed to the expected benefit of defect detection. Code reviews help build impressions between code review participants, and those impressions affect not only the code review process but also future collaborations among developers. Due to the rarity of face-to-face interactions, OSS developers rely more on the reputation of, and their relationship with, the author during code reviews; conversely, Microsoft developers focus more on expertise and anticipated effort. Finally, the third study examines the impact of a developer's reputation on the outcome of his or her code review requests. The results suggest that developers' reputations help them receive quicker feedback on their review requests, complete reviews in less time, and get their code changes accepted. Newcomers to OSS projects suffer the most from delayed feedback, which may discourage their future participation; a reviewer recommendation system that triages incoming code review requests could help reduce delayed feedback for newcomers. Based on the results of these studies, this dissertation makes recommendations for practitioners to adopt and improve code review practices.
Item
Cooperation paradigms for overcoming communication limitations in multirobot wide area coverage (University of Alabama Libraries, 2011)
Wellman, Briana Lowe; Anderson, Monica D.; University of Alabama Tuscaloosa
Multi-robot systems are an important research topic in wide area coverage applications such as hazardous waste clean-up, bomb detection, surveillance, and search and rescue missions. They can work in parallel and complete tasks faster than a single robot. Communication supports cooperation, which speeds up execution, reduces duplication, and prevents interference. Communication among team members is achieved explicitly or implicitly: in explicit communication, messages are intentionally transmitted from robot to robot; in implicit communication, robots observe the environment and other robots' actions. Although many systems use explicit communication, in the exploration of large, open areas (e.g., stadiums and parks), persistent intra-team digital communication is not guaranteed. Therefore, alternative approaches that do not rely on message passing throughout exploration are needed. The novel contributions of this work on overcoming communication limitations in wide area coverage include: (1) the insight that information shared between robots that are close together has more influence on immediate action selection than information shared between robots that are farther apart, so spatial and temporal locality can be instrumental in determining relevance to subsequent action selection; (2) an approach in which observation leverages spatial and temporal locality to infer state rather than relying on digital messaging; and (3) an approach in which robots use spatial rendezvous instead of continuous message passing: robots explore an environment in sectors, or designated areas, and periodically meet to exchange map information about what they explored. Simulations and physical experiments suggest that both approaches can serve as alternatives to cooperation based on continuous point-to-point communication.

Item
Creation of crash-countermeasure police patrol routes targeting hotspot road segments (University of Alabama Libraries, 2010)
Steil, Dana Andrew; Parrish, Allen Scott; University of Alabama Tuscaloosa
This dissertation addresses the problem of expressing, executing, evaluating, and engaging patrol routing algorithms that target event hotspots on roadways. An "event hotspot" is a location that is over-represented in some event occurrence, such as crashes, citations, or any other event of interest. Recommended patrol routes can be used by organizations such as police agencies, emergency medical responders, and taxi services that patrol roadway segments at the proper times to assist with or deter their target events. Patrol routing algorithms specify the movements of simulated mobile agents on a roadway system. The algorithms are first expressed in TURN (Technique for Ultimate Route Navigation), our extensible domain-specific language (DSL) created for this purpose. Algorithms specified in TURN syntax are then executed and evaluated in a custom simulation environment, and those deemed fit for a specific context are engaged by users via a web-based geographic information system (GIS) portal. In this dissertation, details of the patrol routing model are followed by two case studies. The first evaluates agent response times to events when agents are dispatched from region-based staging points. The second evaluates several nondeterministic highway patrol routing algorithms according to four metrics: response time, network coverage, hotspot coverage, and hotspot exposure. The case study results demonstrate the applicability of the patrol routing system.
Item
Cross-Lingual Word Embeddings with Universal Concepts and their Applications (University of Alabama Libraries, 2020)
Sheinidashtegol, Pezhman; Vrbsky, Susan; Musaev, Aibek; University of Alabama Tuscaloosa
Enormous amounts of data are generated in many languages every day due to our increasing global connectivity, which increases the demand for the ability to read and classify data regardless of language. Word embedding is a popular Natural Language Processing (NLP) strategy that uses language modeling and feature learning to map words to vectors of real numbers. However, these models need a significant amount of annotated data for training, and while the availability of labeled data is gradually increasing, most of it exists only in high-resource languages such as English. Researchers with different sets of proficient languages seek to address new problems with multilingual NLP applications. In this dissertation, I present multiple approaches to generating cross-lingual word embeddings (CWE) using universal concepts (UCs) shared among languages, addressing the limitations of existing methods. My work consists of three approaches to building multilingual/bilingual word embeddings. The first approach has two steps: pre-processing and processing. In the pre-processing step, we build a bilingual corpus containing both languages' knowledge, in the form of sentences, for the most frequent words in English and their translated pairs in the target language; knowledge of the source language is shared with the target language, and vice versa, by swapping one word per sentence with its corresponding translation. In the processing step, we use a monolingual embedding estimator to generate the CWE. The second approach generates multilingual word embeddings using UCs and consists of three parts. In part I, we introduce and build UCs from bilingual dictionaries using graph theory, defining words as nodes and translation pairs as edges. In part II, we explain the word2vec configuration used to generate encoded word embeddings. Part III decodes the generated embeddings using the UCs. The final approach uses the supervised method of the MUSE project, but with the model trained on our UCs. Finally, we apply the last two proposed methods to practical NLP applications: document classification, cross-lingual sentiment analysis, and code-switching sentiment analysis. Our proposed methods outperform the state-of-the-art MUSE method on the majority of these applications.
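The abstract describes building universal concepts with words as nodes and translation pairs as edges; under that description, a natural reading is that each connected component of the translation graph forms one concept. A small sketch of that construction follows (the "en:"/"de:"/"fr:" word prefixes are illustrative, not from the dissertation):

```python
# Build universal concepts as connected components of a translation graph.
from collections import defaultdict

def universal_concepts(translation_pairs):
    graph = defaultdict(set)
    for a, b in translation_pairs:
        graph[a].add(b)
        graph[b].add(a)
    seen, concepts = set(), []
    for start in graph:
        if start in seen:
            continue
        component, stack = set(), [start]
        while stack:                       # DFS over translation links
            word = stack.pop()
            if word in component:
                continue
            component.add(word)
            stack.extend(graph[word] - component)
        seen |= component
        concepts.append(component)
    return concepts

pairs = [("en:dog", "de:Hund"), ("de:Hund", "fr:chien"),
         ("en:cat", "fr:chat"), ("de:Katze", "fr:chat")]
for concept in universal_concepts(pairs):
    print(sorted(concept))
# -> one concept for {dog, Hund, chien}, one for {cat, chat, Katze}
```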
Item
Customer cost minimization for energy consumption scheduling in smart grid (University of Alabama Libraries, 2015)
Gao, Jingcheng; Xiao, Yang; University of Alabama Tuscaloosa
The world has been on a fast track of industrial development, and the other side of that coin is that people are consuming more and more energy to support this fast-paced development. Studies point to increasing consumption of traditional, non-renewable energy sources such as coal and oil, while renewable sources such as wind and solar power are still far from mass adoption because of various constraints. It is therefore essential to search for better ways to consume energy, especially electricity, since it is an everyday necessity and is overwhelmingly generated from non-renewable resources. Research on the smart grid has been quite fruitful, with demand response being the most promising area. Many previous studies address real-time pricing schemes and fairness in bills and costs under these schemes, but real-time demand response using energy consumption scheduling algorithms attracted little attention until recently, enabled by the two-way communication capability of the smart grid; the fair-delay problem of energy scheduling, and the use of optimal stopping rules to model these problems, have yet to be studied. Solutions to these problems will make demand response programs more flexible and smart grid participation a more attractive choice for customers. This dissertation examines three problems. The first is cost minimization with real-time demand response, using an energy consumption scheduling model in a neighborhood area network; we simulate this problem with discrete event simulation under different parameter sets and analyze the results under several circumstances. The second problem explores the importance of fairness in terms of delay: a formal concept of delay is defined using the energy scheduling model, and the problem is formulated as cost minimization with a fairness-bound constraint. The proposed algorithm minimizes cost while bounding the delays of all customers, and simulation results show that the algorithm with fair delay performs much better on the fairness index metric than the algorithm without it. In the third problem, we adopt the optimal stopping rule to model the energy consumption scheduling problem. A cost minimization problem with comfortable delay is presented, and an optimal-stopping-rule-based energy consumption scheduling algorithm is proposed to solve it. Simulation results show that the optimal stopping rule algorithm achieves lower total cost than a greedy algorithm while satisfying the comfort-level constraint.
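For intuition about the optimal-stopping flavor of the third problem, here is a hedged toy example, not the dissertation's model: an appliance must start within N upcoming price slots, and with i.i.d. Uniform(0,1) prices, backward induction gives the threshold recurrence V_k = V_{k-1} - V_{k-1}^2/2 (the expected cost with k slots left), making "start as soon as the price beats the value of continuing" optimal for this toy setting.

```python
# Optimal stopping for a toy deadline-constrained start decision.
import random

def thresholds(n_slots):
    v = [0.5]                  # one slot left: must accept, E[X] = 0.5
    for _ in range(n_slots - 1):
        v.append(v[-1] - v[-1] ** 2 / 2)   # E[min(X, V_prev)] for U(0,1)
    return v                   # v[k-1] = expected cost with k slots remaining

def schedule_start(prices):
    v = thresholds(len(prices))
    for i, price in enumerate(prices):
        remaining = len(prices) - i
        # Start now if forced (last slot) or price beats continuing,
        # whose expected cost is v[remaining - 2].
        if remaining == 1 or price < v[remaining - 2]:
            return i, price
    raise AssertionError("unreachable")

random.seed(7)
prices = [random.random() for _ in range(12)]
slot, price = schedule_start(prices)
print(f"start in slot {slot} at price {price:.3f}")
```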
Item
CV-NICS: a lightweight solution to the correspondence problem (University of Alabama Libraries, 2009)
Jay, Graylin Trevor; Smith, Randy K.; University of Alabama Tuscaloosa
In this dissertation, I present a novel approach to solving the correspondence problem using basic statistical classification techniques. While metrics such as Pearson's rho or cosine similarity are not powerful enough to solve the correspondence problem directly, their performance can be enhanced by augmenting the scene with random color static from a projector. Over time, this noise increases the statistical independence of imaged points that are not in correspondence, which reduces the correspondence problem to a simple similarity search over temporal features. Extensive experiments have shown the approach to be as effective as more complex structured-light techniques at producing very dense correspondence data for a variety of scenes. The approach differentiates itself from traditional structured lighting by not relying on known camera or projector geometries and by allowing relatively lax capture conditions. Due to its statistical nature, and unlike more recognition-focused techniques, the approach is naturally amenable to quality assessment and analysis. This dissertation provides background on the correspondence problem, presents empirical and analytical results regarding the new technique, and reviews the related work in the literature.
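A hedged simulation of the statistical idea: projected random static gives each scene point a temporal intensity signature, and pixels in two views that image the same point have highly correlated signatures, so correspondence reduces to a nearest-neighbor search over those vectors. Real capture involves geometry, albedo, and sensor effects handled in the dissertation; this toy uses a known permutation as ground truth.

```python
# Match pixels across two noisy views by temporal correlation.
import numpy as np

rng = np.random.default_rng(42)
n_points, n_frames = 50, 200
static = rng.random((n_points, n_frames))        # per-point projected noise

view_a = static + 0.05 * rng.standard_normal((n_points, n_frames))
perm = rng.permutation(n_points)                 # unknown pixel ordering
view_b = static[perm] + 0.05 * rng.standard_normal((n_points, n_frames))

def normalize(x):
    x = x - x.mean(axis=1, keepdims=True)
    return x / np.linalg.norm(x, axis=1, keepdims=True)

corr = normalize(view_a) @ normalize(view_b).T   # all-pairs Pearson rho
match = corr.argmax(axis=1)                      # best partner per pixel
print("correct correspondences:",
      (perm[match] == np.arange(n_points)).sum(), "of", n_points)
```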
Item
A data mining approach to identify perpetrators: an integration framework and case studies (University of Alabama Libraries, 2010)
Ding, Li; Dixon, Brandon; University of Alabama Tuscaloosa
Data mining and social network analysis have been widely used in law enforcement to solve crimes. Research questions such as the strength of ties in social networks, crime pattern discovery, and prioritizing offenders have been studied in this area. However, most of those studies failed to consider the noisy nature of the data, and the techniques they proposed have only been applied to small-scale data sets. It is therefore important to design a framework that works on large-scale data sets and tolerates noisy data. In this dissertation, we built an integrated crime detection framework that combines two data mining techniques, decision trees and genetic algorithms, with graph theory to address these problems. Our crime pattern analysis is based on all offenders in the state of Alabama over the past 50 years, and our constructed social network contains all Alabama residents, which allows us to fully evaluate the proposed models. Two case studies evaluate the framework. One is based on 625 inmates released from the Madison County jail in 2004; the experimental results show that our recommended risk level correlates strongly with future offenses. The other is based on 100 real police reports; there, the experimental results show that the median ranking of arrestees remains in the top 3% of the returned list.

Item
Database consistency in cloud databases (University of Alabama Libraries, 2013)
Islam, Md Ashfakul; Vrbsky, Susan V.; University of Alabama Tuscaloosa
Cloud storage services are becoming a very popular solution for medium-sized and startup companies. However, few suitable solutions are offered for deploying transactional databases on a cloud platform. Maintaining the ACID properties (Atomicity, Consistency, Isolation, and Durability) is the primary obstacle to implementing transactional cloud databases: the main features of cloud computing, namely scalability, availability, and reliability, are achieved by sacrificing consistency, and the cost of consistency is one of the key issues in cloud transactional databases that must be addressed. While different forms of consistent states have been introduced, they do not address the needs of many database applications. In this dissertation we propose a tree-based consistency approach, called TBC, that reduces interdependency among replica servers to minimize the response time of cloud databases and to maximize the performance of those applications. We compare different techniques for maintaining consistency, including the classic approach, the quorum approach, and our tree-based consistency approach; we identify the key parameters controlling consistency maintenance in cloud databases and study the behavior of the different techniques with respect to those parameters. Experimental results indicate that our TBC approach reduces interdependency between data replicas and performs well. We also implement a transaction management system using TBC as the consistency approach. We have designed a hierarchical lock manager that works at a variable granularity level and allows much more concurrent access to data items than regular lock managers. The TBC transaction management system ensures serializability and guarantees the ACID properties; the common isolation problems in transaction management are prevented, and we prove that dirty reads, unrepeatable reads, and dirty writes or lost updates can never occur in concurrent execution of the transactions. We also present an efficient auto-scaling feature for the proposed transaction manager. Our experimental results show that TBC has better response time than other approaches regardless of the arrival rate, read-write ratio, variation in data selection preference, or database size. The Tree-Based Consistency approach is a viable solution for ACID transactional database management in the cloud.

Item
A demonstration-based approach for domain-specific modeling language creation (University of Alabama Libraries, 2013)
Cho, Hyun; Gray, Jeff; University of Alabama Tuscaloosa
Model-Driven Engineering (MDE) is a promising approach to addressing the complexity of large software system development; it enables software engineers to develop software systems with high-level abstract models. In MDE, models are first-class entities of software system development that can improve the understanding of problem domains, and models are used to predict the quality and performance of software systems. Within the context of MDE, Domain-Specific Modeling Languages (DSMLs) are developed to describe the notions of a specific domain using either textual or graphical syntax. DSMLs provide abstractions and notations, as well as precise and concise modeling constructs, for specific domains (e.g., automotive, avionics, finance). They assist domain experts in describing their problems closer to the problem domain than General-Purpose Modeling Languages (GPMLs), such as the Unified Modeling Language (UML), or programming languages. DSMLs have been shown in the literature to provide several benefits, such as improved productivity, improved quality, and reduced miscommunication. However, developing new DSMLs can be challenging and requires much time and effort, and the current state of DSML tooling is still in its infancy compared to the tools and resources available for creating programming language environments. This dissertation investigates a new approach to DSML creation that gives domain experts a more prominent role in describing the languages they use. The core contributions of the dissertation focus on two aspects of domain-specific modeling language creation: 1) enabling the creation of DSMLs in a demonstration-based approach, by recording and analyzing the operational behavior exhibited by a domain expert as they model notions of their domain, and 2) enabling domain expert verification of the inferred language by exploring the model space. The objectives and contributions of the research are explained in detail in this dissertation, combined with case studies of several domain-specific modeling languages that demonstrate how a domain expert can build their own DSMLs in practice.