Theses and Dissertations - Department of Computer Science
Browsing Theses and Dissertations - Department of Computer Science by Author "Brown, Marcus E."
Now showing 1 - 10 of 10
Item Contemporary peer code review practices and associated benefits (University of Alabama Libraries, 2015) Bosu, Amiangshu Shekhar; Carver, Jeffrey C.; University of Alabama Tuscaloosa
Prior research indicates that peer code review is an effective method for reducing the number of defects and improving the quality of code. Besides maintaining the integrity of the code, code review spreads knowledge, expertise, and development techniques among the review participants. In recent years, many Open Source Software (OSS) communities and commercial organizations have adopted 'contemporary' or 'modern' code review, an informal, regular, tool-based process. Because both OSS and commercial developers spend a significant amount of effort performing code reviews, the primary goal of this dissertation is to better understand contemporary code review, its non-technical benefits, the factors that influence its outcomes, and its impact on peer impression formation. To address this goal, this dissertation describes empirical studies using surveys, software repository mining, and social network analysis. The first study is a survey of OSS developers to understand their collaboration and the process by which they form impressions of each other. The results suggest that coding-related factors influence impression formation the most among OSS developers. Therefore, the types of interactions in which participants can judge a peer's code or creativity (e.g., code review) should be crucial for peer impression formation. The results of this study motivated the selection of peer code review as the focus of this dissertation. The second study describes a survey of developers from 36 popular OSS projects and from Microsoft about: 1) the code review process in their projects, 2) their expectations of code review, and 3) how code review impacts impressions of their peers.
The results suggest that the primary perceived benefits of code review are knowledge sharing, relationship building, better designs, and ensuring maintainable code, as opposed to the expected benefit of defect detection. Code reviews help build impressions between code review participants. Those impressions impact not only the code review process but also future collaborations among developers. Due to the rarity of face-to-face interactions, OSS developers rely more on the reputation of, and their relationship with, the author during code reviews. Conversely, Microsoft developers focus more on expertise and anticipated effort. Finally, the third study examines the impact of developers' reputations on the outcomes of their code review requests. The results suggest that developers' reputations help them receive quicker feedback on their review requests, complete reviews in less time, and get their code changes accepted. Newcomers to OSS projects suffer the most from delayed feedback, which may discourage their future participation. A reviewer recommendation system that triages incoming code review requests could help reduce delayed feedback for newcomers. Based on the results from these studies, this dissertation makes recommendations for practitioners to adopt and improve code review practices.

Item Database consistency in cloud databases (University of Alabama Libraries, 2013) Islam, Md Ashfakul; Vrbsky, Susan V.; University of Alabama Tuscaloosa
Cloud storage services are becoming a popular solution for medium-sized and startup companies. However, there are still few suitable solutions for deploying transactional databases on a cloud platform. Maintaining the ACID (Atomicity, Consistency, Isolation and Durability) properties is the primary obstacle to the implementation of transactional cloud databases. The main features of cloud computing (scalability, availability, and reliability) are achieved by sacrificing consistency.
The cost of consistency is one of the key issues in cloud transactional databases that must be addressed. While different forms of consistent states have been introduced, they do not address the needs of many database applications. In this dissertation we propose a tree-based consistency approach, called TBC, that reduces interdependency among replica servers to minimize the response time of cloud databases and to maximize the performance of database applications. We compare different techniques for maintaining consistency, including the classic approach, the quorum approach, and our tree-based consistency approach. We identify the key controlling parameters of consistency maintenance in cloud databases and study the behavior of the different techniques with respect to those parameters. Experimental results indicate that our TBC approach reduces interdependency between data replicas and performs well. We also implement a transaction management system using TBC as the consistency approach. We have designed a hierarchical lock manager that can work at variable granularity levels and allows much more concurrent access to data items than regular lock managers. The TBC transaction management system ensures serializability and guarantees the ACID properties. The common isolation problems in transaction management are prevented, and we prove that dirty reads, unrepeatable reads, and dirty writes or lost updates will never occur in concurrent execution of transactions. We also present an efficient auto-scaling feature for the proposed transaction manager. Our experimental results show that TBC has a better response time than other approaches regardless of the arrival rate, read-write ratio, variation in data selection preference, or database size.
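The tree-propagation idea behind this kind of consistency scheme can be sketched in a few lines. The classes and write path below are illustrative assumptions only, not the dissertation's actual TBC protocol: each replica coordinates only with its direct children, so no server depends on the entire replica set.

```python
# Hypothetical sketch of tree-structured update propagation (not the
# dissertation's actual TBC protocol): each replica forwards a write
# only to its direct children, limiting interdependency to its subtree.

class Replica:
    def __init__(self, name):
        self.name = name
        self.children = []
        self.data = {}

    def add_child(self, child):
        self.children.append(child)

    def apply_write(self, key, value):
        """Apply a write locally, then propagate it down the tree."""
        self.data[key] = value
        for child in self.children:        # a node talks only to its
            child.apply_write(key, value)  # children, not all replicas

# Build a small replica tree: a root with two children and one grandchild.
root = Replica("root")
a, b, c = Replica("a"), Replica("b"), Replica("c")
root.add_child(a)
root.add_child(b)
a.add_child(c)

root.apply_write("x", 42)
print(all(r.data.get("x") == 42 for r in (root, a, b, c)))  # True
```

In a quorum scheme, by contrast, a write would have to contact a majority of all replicas, which is the interdependency a tree layout is meant to reduce.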
The Tree-Based Consistency approach is a viable solution for ACID transactional database management in a cloud.

Item Design and analysis of accountable networked and distributed systems (University of Alabama Libraries, 2013) Xiao, Zhifeng; Xiao, Yang; University of Alabama Tuscaloosa
This dissertation focuses on the design and analysis of accountable computing for a wide range of networked systems at affordable cost. The central idea is to incorporate accountability, a long-neglected security objective, into the design and implementation of modern computing systems. Broadly speaking, accountability in the cyber-security domain means that every entity ought to be held responsible for its behavior, and that there always exists undeniable and verifiable evidence linking each event to the liable entities. This dissertation studies accountable computing in three different contexts: traditional distributed systems, cloud computing, and the Smart Grid. We first propose a quantitative model called P-Accountability to assess the degree of system accountability. P-Accountability consists of a flat model and a hierarchical model. Our results show that P-Accountability is an effective metric for evaluating the accountability of general distributed systems such as PeerReview [1]. Next, we develop Accountable MapReduce for cloud computing to prevent malicious worker machines from manipulating processing results. To achieve this goal, we set up a group of auditors to perform an Accountability-Test (A-test) that checks all worker machines and detects malicious nodes in real time. Finally, we investigate accountability issues in the neighborhood-area smart grid. A mutual inspection scheme is presented to enable non-repudiation for metering.
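The auditing idea behind an accountability test of this kind — re-executing a worker's tasks and comparing the recomputed results against what the worker reported — might be sketched as follows. The function names, sampling policy, and data here are hypothetical illustrations, not the dissertation's A-test implementation:

```python
# Hypothetical sketch of a re-execution accountability check: an auditor
# recomputes a sample of a worker's tasks with a trusted function and
# flags any task whose reported result disagrees. Names and the sampling
# policy are illustrative assumptions.
import random

def a_test(tasks, reported, trusted_fn, sample_size, seed=0):
    """Return the ids of sampled tasks whose reported results fail re-execution."""
    rng = random.Random(seed)
    sample = rng.sample(list(tasks), min(sample_size, len(tasks)))
    return [tid for tid in sample if trusted_fn(tasks[tid]) != reported[tid]]

# A worker was asked to square each input but manipulated task "t2".
tasks = {"t1": 3, "t2": 4, "t3": 5}
reported = {"t1": 9, "t2": 99, "t3": 25}
bad = a_test(tasks, reported, lambda x: x * x, sample_size=3)
print(bad)  # ['t2']
```

Auditing all tasks gives real-time detection at full cost; sampling a subset trades detection probability for auditor effort.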
In addition, we propose and analyze a suite of algorithms to identify malicious meters for the detection of energy theft.

Item Facilitating operator interaction with quality of surveillance multi-robot systems (University of Alabama Libraries, 2012) Dawson, Shameka; Anderson, Monica D.; University of Alabama Tuscaloosa
Robotic systems are becoming more practical in military applications. In fact, unmanned aerial vehicles (UAVs) are being used for surveillance and reconnaissance missions. Using UAVs to survey a region of interest can increase situational awareness and decrease human casualties by allowing the operator to view the video feeds from the UAVs. However, current systems rely on complex one-operator (or multiple-operator), one-robot interfaces. In addition, human-in-the-loop models create issues because human operators tend to intervene more frequently if they do not trust the system or their expectations of the autonomy are not met. As a result, excessive or inept human intervention can negatively affect workload, situational awareness, and performance. This dissertation aims to allow a single operator to efficiently manage multiple UAVs and interact effectively with higher levels of autonomy. By providing the operator with a mechanism to interact with the autonomy and aid in decision making, the operator becomes a part of the autonomous team. Contributions of this dissertation related to facilitating operator interaction with multi-robot surveillance systems include: (1) evidence that trust depends not only on understanding that the actions of an autonomous team are rational, but also on experiencing the actions' rationale; (2) a novel approach to teaming based on spatial and temporal environmental cues; and (3) the design and implementation of a testbed to measure the effect of the operator teaming with an autonomous system. Studies are used to evaluate the system and elucidate factors that affect operator trust.
Results suggest that a human operator can team with multiple robots and effectively interact with higher levels of autonomy by experiencing the autonomous team's rationale through environmental cues. This approach using spatial and temporal environmental cues was also found to promote trust, lower workload, and increase situational awareness while not degrading task performance.

Item FAPA: flooding attack protection architecture in a cloud system (University of Alabama Libraries, 2014) Zunnurhain, Kazi; Vrbsky, Susan V.; Hasan, Ragib; University of Alabama Tuscaloosa
The growing rate of cloud adoption each year is making cloud computing the leading IT computational technology. While cloud computing can be productive and economical, it is still vulnerable to different types of external threats, one of which is a Denial of Service (DoS) attack. DoS attacks have long been an open security problem of the Internet. Most proposed solutions to DoS attacks require upgrades to routers, modification of the BGP (Border Gateway Protocol), use of additional control bits in IP packets, or adjustments to legacy routers in the routing path. It is extremely difficult to satisfy all of these requirements, considering that the Internet, and potentially a cloud, consists of a very large number of autonomous systems with routers from different vendors deployed over decades. Authentication protocols are implemented by some of the leading companies manufacturing DoS-prevention routers. However, authentication protocols and embedded digital signatures are expensive and vulnerable. This is contrary to the benefits of renting a cloud system, which are to save capital as well as operational expenditure. Rather than depending on cloud providers, we propose a model, called FAPA (Flooding Attack Protection Architecture), to detect and filter packets when DoS attacks occur. FAPA can be deployed at different levels of the system, such as at the user's end.
FAPA runs locally on top of the client's terminal and is independent of the provider's cloud machine. There is no need to deploy expensive packet-capturing tools, nor does FAPA require any embedded digital signature inside the packets. There is no additional charge from the provider since the application runs on the customer's end. Moreover, automatic message propagation invokes the cloud server to trace the source or adversary. In FAPA, denial of service is detected through periodic analysis of the traffic behavior of the raw packets. FAPA generates an alarm if a DoS attack is detected and removes flooding by filtering. Because FAPA is employed on the client's side, customers have control over traffic trends, which is absent in other DoS-prevention approaches. FAPA comprises five modules, each with an assigned task in detecting DoS attacks and removing threats by filtering the spoofed packets. One module fetches the traffic packets and unwraps them. Another module records the pertinent parameters of network packets. Implementation of a FAPA prototype and experimental results have demonstrated the feasibility of FAPA. From our initial experiments we observed that in the event of a DoS attack, some of the network parameters change. Hence, in FAPA a separate module is dedicated to storing information about traffic behavior. If FAPA observes any inconsistent traffic behavior, it invokes the filtering modules to remove the compromised network packets. FAPA filtering detects the threat by using previously recorded information. FAPA filtering was implemented for a cluster environment, and we ran experiments to determine its effectiveness. The filtering module was then modified to run in a cloud environment and was able to handle a large set of network packets. We investigated the impact of DDoS attacks on co-resident virtual machines and their neighbors.
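The detect-then-filter behavior described above can be illustrated with a minimal sketch. The threshold rule, function names, and packet representation are assumptions for illustration, not FAPA's actual modules:

```python
# Illustrative sketch (not FAPA's actual design): flag source addresses
# whose packet count in the current window far exceeds a recorded
# baseline rate, then drop their packets. Threshold and names assumed.
from collections import Counter

def find_flooders(window_packets, baseline_rate, factor=10):
    """Return sources sending more than `factor` times the baseline rate."""
    counts = Counter(src for src, _ in window_packets)
    return {src for src, n in counts.items() if n > factor * baseline_rate}

def filter_packets(window_packets, flooders):
    """Keep only packets from sources not flagged as flooders."""
    return [(src, p) for src, p in window_packets if src not in flooders]

# Two normal clients send a few packets; one spoofed source floods.
packets = ([("10.0.0.1", "req")] * 3
           + [("10.0.0.2", "req")] * 2
           + [("6.6.6.6", "syn")] * 500)
flooders = find_flooders(packets, baseline_rate=4)
clean = filter_packets(packets, flooders)
print(flooders, len(clean))  # {'6.6.6.6'} 5
```

A real system would of course track the baseline per source over time and re-evaluate it each analysis period; a single global rate is used here only to keep the sketch short.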
Later we conducted DDoS attacks from a commercially launched public cloud onto private cloud instances to observe the amplification of an attack, and we checked the efficiency of FAPA in filtering those illegitimate packets. We also measured FAPA's performance in terms of false positive and false negative rates. We deployed several commercially used stress-testing tools to observe FAPA's performance. Both in the cloud and on the cluster, our experimental results demonstrated that FAPA was able to detect and filter packets to successfully stop a DoS attack.

Item Measuring the effects of low assistive vs. moderately assistive environments on novice programmers (University of Alabama Libraries, 2012) Dillon, Edward; Brown, Marcus E.; University of Alabama Tuscaloosa
Improving the novice's experience with programming has been an important research topic for some time. The high attrition rate of CS majors continues to be a problem. Incoming majors are being exposed to programming, but many are driven away from the field. As a way to engage novices with programming, many CS departments have adopted visual environments. However, not all novices are taught to program using visual environments. Typically, students are introduced to programming through either a visual or a command line environment at the beginning stages of a CS curriculum. The features in standard command line environments are not as assistive to programmers as those in visual environments. Novices must learn both language syntax and semantics while navigating the file system and compilation tools. On the other hand, visual environments with highly assistive features can restrict a novice to a fixed set of foundational programming skills that excludes exposure to syntax checking, compilation, and file systems. Novices will eventually need to move to a less assistive environment to round out their skill set.
The objective of this research was to determine whether certain environments are more appropriate for teaching novices how to program, based on their respective levels of feature assistance. There are anecdotal motivations for using either tools with low assistive features, like command line environments (which promote acquisition of useful mental models), or tools with moderate to high assistive features, like visual environments (which engage novices while programming). Unfortunately, no systematic study exists that supports either supposition. This research was composed of three studies for evaluating environments with varying feature sets: a high school outreach, a CS1 laboratory study, and a CS1 study. Engagement, comprehension, efficiency, and usability were used as measures to evaluate the environments during these studies. Overall, this research showed that a moderately assistive environment imposes a lower learning curve on novices, while a low assistive environment appears to broaden their understanding of programming.

Item Network security: design, analysis and tradeoff evaluation (University of Alabama Libraries, 2009) Olteanu, Alina; Xiao, Yang; University of Alabama Tuscaloosa
Energy efficiency is an essential requirement for all wireless devices. Recent developments in wireless sensor networks (WSNs), wireless local area networks (WLANs), and wireless personal area networks (WPANs) have raised a demand for energy-efficient algorithms and energy-efficient medium access control (MAC) protocols. When considering security in this context, additional overhead is added to the network, and efforts must be made to minimize the extra load while at the same time achieving the desired level of security. Security attacks in the Internet are linked to a different set of vulnerabilities.
The complex architecture of the Internet, spanning different administrative domains and legal systems, makes it easy for attackers to conceal the source of an attack and preserve their anonymity. This dissertation addresses several important issues in network security and performance, including intrusion detection, cipher design, security overhead analysis, and tracing, as follows. We first propose a model for intrusion detection in WSNs, which optimizes network coverage and detection while minimizing the number of sensors and energy consumption. We then integrate a security mechanism into the sensor network in order to achieve secure communication. Specifically, we propose a lightweight block cipher based on a multiple recursive generator (MRG), which is suitable for WSNs and RFID, where power consumption, bandwidth, memory, and storage space are critical. Next, we consider security in WLANs and WPANs, and we apply the advanced encryption standard (AES) cipher to ensure secure transmission of frames. We integrate AES encryption at the MAC layer of 802.11 WLANs and 802.15.3 UWB WPANs, respectively, and study the overhead introduced by AES in this context. Finally, we analyze a type of security attack in the Internet where the intruder uses a chain of host machines before attacking the target. We discuss two mechanisms for tracing intruders in the Internet, one based on thumbprinting and the other on timestamping of transmission activities.

Item Programming by voice: a hands-free approach for motorically challenged children (University of Alabama Libraries, 2015) Wagner, Amber; Gray, Jeff; University of Alabama Tuscaloosa
Computer Science (CS) educators frequently develop new methodologies, languages, and programming environments to teach novice programmers the fundamental concepts of CS. A recent trend has focused on new environments that reduce the initial challenges associated with the heavy syntax focus of textual programming languages.
Numerous Initial Programming Environments (IPEs) have been created for student use, and in some cases they have fostered self-discovery and inquiry-based exploration. In this dissertation, three IPEs are discussed: Scratch (2015), Lego Mindstorms (2015), and Blockly (2015). Although the block-based nature of IPEs can be helpful for learning concepts in CS, a small group of students (approximately 5%) is excluded from learning experiences and engagement in CS due to block-based environments' dependence on the Windows, Icons, Menus, Pointer (WIMP) metaphor. Block-based environments often require the use of both a mouse and a keyboard, which motorically challenged users are often unable to operate. Based on research performed and presented in this dissertation, a Vocal User Interface (VUI) is a viable solution that offers a "Programming by Voice" (PBV) capability (i.e., a capability to describe a program without using a keyboard or mouse). However, adapting legacy applications can be time consuming, particularly if multiple applications (such as the three IPEs previously mentioned) require specialized VUIs. Each environment has its own visual layout and its own commands; therefore, each application requires a different VUI. In order to create a more generic solution, a Domain-Specific Language (DSL) can be applied to create a semi-automated process, allowing a level of abstraction that captures the specific needs of each IPE. From the specification of each IPE, a customized VUI can be generated that integrates with the legacy application in a non-invasive manner. The nine chapters included in this dissertation were motivated by the following four research questions: 1. How can we improve initial programming instruction? 2. Can all children participate in programming instruction? 3. How do we implement PBV to allow children to take advantage of creative, block-based programming environments? 4. What are some potential ideas that can assist in generalizing the process of voice-enabling IPEs?

Item Software engineering for enabling scientific software development (University of Alabama Libraries, 2015) Heaton, Dustin; Carver, Jeffrey C.; University of Alabama Tuscaloosa
Scientific software is code written by scientists for the purpose of doing research. While the results of this software development have been widely published, there has been relatively little publication about how this software is developed. There have been even fewer publications that look at the software engineering aspects of scientific software development, and fewer still that suggest software engineering techniques to help scientists develop the software on which much of our modern knowledge relies. The software engineers who have studied the development processes of scientific software developers agree that scientists would be able to produce better software if they had the knowledge and familiarity to use specific software engineering practices. The primary focus of this dissertation is to provide that knowledge to scientific software developers in order to better enable them to produce quality software as efficiently as possible. To achieve this focus, this dissertation has three aspects. First, this dissertation provides a literature review of the claims that have been made in the software engineering and scientific software literature, culminating in a list of claims about software engineering practices. Scientific software developers can use this list to find practices they are unaware of that should prove useful to their development. Additionally, software engineers can use the list to help determine which practices need support before scientists can take advantage of them. Second, this dissertation provides a series of surveys that capture the current state of software engineering knowledge in the scientific software development community.
The results of these surveys show that scientific software developers are unfamiliar with many of the practices that could help them address their most challenging issues. Third, this dissertation provides examples showing that, with support from software engineers, scientific software developers can take advantage of practices that have proven useful in traditional software engineering and increase the quality of their work without requiring an overwhelming amount of extra effort.

Item Which environment is more suitable for novice programmers: editor/command line/console environment vs. Integrated Development Environment (University of Alabama Libraries, 2009) Dillon, Edward, Jr.; Brown, Marcus E.; University of Alabama Tuscaloosa
When novice programmers begin programming, they face many problems due to their lack of programming experience. Integrated Development Environments (IDEs) are used as a way to help novices become more effective at learning to program. The question is whether such an environment is more effective than a command line/console environment. This study tried to address that question through interviews with students who were using these environments. The study was composed of two groups of undergraduate students who were taking courses in Computer Science. The first group consisted of students who were involved in a course sequence beginning with the Microsoft Visual Studio IDE, followed by a command line environment for the last course in the sequence. The second group started programming with a command line environment. Interviews were conducted with both groups in order to gain information about these environments. The responses showed that the students favored the Microsoft Visual Studio IDE. However, the differences in the results were not significant enough to say that an IDE in general is better than a command line environment.
The intent was for this information to provide not only background but also potential foundational evidence for determining which environment may be more suitable for novice programmers to use. This information will also serve as a basis for further research and studies in this area.