Research Project Descriptions

ARI

Applied Research Institute

Alf Weaver, Anthony Gadient and Kent Schlussel

The ARI is a consortium of UVa faculty from engineering, medicine, and the sciences who focus on problems of national importance, especially those related to homeland security and defense intelligence procurement and analysis. To date we have contracts related to underwater propulsion of unmanned vehicles, cybersecurity, special-purpose integrated circuits, and cutting-edge threat detection. ARI has partnered with the Defense Intelligence Agency and a number of its contractors to conduct applied research in the areas of cybersecurity; biodefense; illicit drug interdiction; detection of improvised explosive devices; chemical, biological, radiological, nuclear, and explosive (CBRNE) technologies; rapid technical collection; biometrics; and solutions for intelligence analysts.

Project Homepage

Bioinformatics

Computational biology for the new millennium

Bill Pearson and Gabriel Robins

We are addressing important problems in computational biology, including the construction of evolutionary (phylogenetic) trees over a given set of taxa (e.g., organisms, DNA molecules, etc.), where the optimization objective entails the best fit of tree topologies under a least-squares or parsimony criterion. Another problem that we have investigated is the primer selection problem, which arises in polymerase chain reaction (PCR) experiments (i.e., DNA cloning). Here, given a set of DNA strands and a cost criterion, our objective is to find a minimal covering set of compatible DNA subsequences (primers) having least cost, in the hope of discovering new proteins. We are also researching the topographies of solution space landscapes, as well as new heuristics for multiple sequence alignment.
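
The primer selection problem described above has a weighted set-cover flavor: each candidate primer "covers" the strands in which it occurs, and we want a cheap covering set. The sketch below applies the standard greedy set-cover heuristic to a toy instance; the strands, candidate primers, and length-based cost are hypothetical illustrations, not the project's actual formulation or data.

```python
# Greedy heuristic for a simplified primer selection problem: choose a
# low-cost set of primers (short subsequences) that together cover
# every DNA strand in the input set.

def covers(primer, strand):
    """A primer covers a strand if it occurs as a substring."""
    return primer in strand

def greedy_primer_selection(strands, candidate_primers, cost):
    uncovered = set(strands)
    chosen = []
    while uncovered:
        # Pick the primer with the best cost per newly covered strand.
        best = min(
            (p for p in candidate_primers if any(covers(p, s) for s in uncovered)),
            key=lambda p: cost(p) / sum(covers(p, s) for s in uncovered),
            default=None,
        )
        if best is None:        # some strand cannot be covered at all
            break
        chosen.append(best)
        uncovered -= {s for s in uncovered if covers(best, s)}
    return chosen

# Hypothetical example data.
strands = ["ACGTTGCA", "TTGCAACG", "GGGACGTT"]
primers = ["ACGT", "TTGC", "GGGA"]
print(greedy_primer_selection(strands, primers, cost=len))   # ['ACGT', 'TTGC']
```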

Project Homepage

Calamari

Localization in Large Wireless Sensor Fields

Kamin Whitehouse

Wireless sensor networks consist of many small and cheap computational nodes that can sense the environment, communicate with neighboring nodes, and perform simple computations on sensor data. They may be deployed outdoors in large sensor fields to help detect and control the spread of wildfires, to detect and track enemy vehicles, or for environmental monitoring and precision agriculture. In many such applications, each node requires location information to properly interpret its own sensor data and to act according to its placement in the world and in the network. Calamari is a system for providing each node in a sensor field with its own location information. Multi-hop sensor field localization is challenging because of the complex interaction between ranging error, the localization algorithm, and the actual network topology. Calamari predicts and explores these interactions using empirical characterizations and trace-based simulations.
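
A minimal sketch of the core estimation step in any such localization system: given noisy range estimates to a few reference nodes, recover a position by least squares. The anchor layout, noise level, and plain gradient-descent solver below are assumptions for illustration; Calamari's actual pipeline is built on empirical ranging characterizations and trace-based simulation rather than this idealized model.

```python
import numpy as np

def localize(anchors, ranges, iters=200, lr=0.1):
    """Estimate a 2D position from noisy ranges to anchor nodes by
    minimizing the sum of squared range residuals with gradient descent."""
    x = np.mean(anchors, axis=0)              # start at the anchor centroid
    for _ in range(iters):
        grad = np.zeros(2)
        for a, r in zip(anchors, ranges):
            d = np.linalg.norm(x - a) + 1e-9
            grad += 2 * (d - r) * (x - a) / d
        x -= lr * grad
    return x

anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
true_pos = np.array([3.0, 4.0])
ranges = np.linalg.norm(anchors - true_pos, axis=1) + np.random.normal(0, 0.2, 3)
print(localize(anchors, ranges))              # should land near (3, 4)
```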

Project Homepage

Calibration

Macro-calibration in Sensor/Actuator Networks

Kamin Whitehouse

Instead of using a single highly-engineered sensor, sensor networks monitor the world through hundreds of cheap assembly-line components. Many such devices have poor factory calibration and drift over time during use. Because sensor networks can contain thousands of such sensors and can be influenced by the environment, they must self-calibrate after deployment. Unfortunately, pairing each sensor with an actuator to provide a known stimulus does not solve this problem because the actuators must also be calibrated. We developed a technique in which each node calibrates its sensor using the actuators of all of its neighbors. This creates an over-constrained system from which calibration coefficients can be inferred even in the face of some noise from both the sensors and actuators. We demonstrated the calibration technique on a 36-node testbed with microphones and acoustic actuators.
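
To make the "over-constrained system" concrete, here is a toy joint-calibration sketch under an assumed multiplicative model: each reading is (unknown sensor gain) times (unknown actuator emission). Taking logs makes the system linear, and fixing one gain removes the global scale ambiguity. The model, noise level, and node count are illustrative; the actual work used microphones and acoustic actuators on a 36-node testbed.

```python
import numpy as np

def calibrate(readings):
    """Solve for per-node gains and per-actuator emissions from the
    over-constrained set of pairwise readings, reading[i,j] ~ gain_i * emit_j."""
    n = readings.shape[0]
    rows, rhs = [], []
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            row = np.zeros(2 * n)             # [log gains | log emissions]
            row[i], row[n + j] = 1.0, 1.0
            rows.append(row)
            rhs.append(np.log(readings[i, j]))
    A, b = np.array(rows), np.array(rhs)
    A = np.vstack([A, np.eye(2 * n)[0]])      # soft constraint: gain_0 = 1
    b = np.append(b, 0.0)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return np.exp(x[:n]), np.exp(x[n:])

true_gain = np.array([1.0, 1.3, 0.8, 1.1])
true_emit = np.array([2.0, 1.5, 2.5, 1.8])
readings = np.outer(true_gain, true_emit) * np.exp(np.random.normal(0, 0.02, (4, 4)))
gains, _ = calibrate(readings)
print(gains / gains[0])                       # close to true_gain
```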

Project Homepage

CoCo

Continuous Compilation

Mary Lou Soffa and Jack Davidson

The challenge for program optimization research today is to develop new techniques and approaches that yield performance gains beyond the small single-digit improvements that are now typical. In this research, we address this challenge by investigating and developing an innovative framework and system for continuously and adaptively applying optimizations. Our system, the Continuous Compiler (CoCo), applies optimizations both statically at compile-time and dynamically at run-time using optimization plans developed at compile time and adapted at run time. Rather than focusing on developing new optimization algorithms (e.g., a new register allocation algorithm, a new loop interchange algorithm) or improving existing optimizations (e.g., better coloring heuristics, better placement algorithms), this project focuses on understanding the interaction of existing optimizations and the efficacy of static and dynamic optimizations. Using this knowledge along with information about the application gathered by static analysis, profile information and monitoring, CoCo determines how to apply a suite of optimizations so that the optimizations work in concert to yield the best improvements.

Project Homepage

Collision Detection

Exploiting the Capture Effect for Collision Detection and Recovery

Kamin Whitehouse

Besides complex techniques like CDMA, most wireless MAC protocols today lose packets during collisions and cannot differentiate between packet collisions and channel noise. Therefore, they unnecessarily avoid simultaneous transmission even though many radios exhibit the Capture Effect - the ability to correctly receive a strong signal from one transmitter despite significant interference from other transmitters. We developed a technique to exploit this effect to cheaply detect and recover from packet collisions using only simple, low-power radios that are commonly used in sensor networks. This technique differentiates between collisions and channel noise, recovers one of the packets entirely, and identifies the colliding transmitters. This enables more aggressive MAC protocols that transmit concurrently with neighbors; if a collision occurs, one of the packets will actually get through and feedback can be used to request the lost packets and/or to prevent future collisions. We demonstrated the collision detection and recovery technique on a 28-node testbed, and experiments with new MAC protocols yield 30% gains in bandwidth and latency.

Project Homepage

ComponentOS

Component Based OSs for Embedded Systems

Jack Stankovic and Marty Humphrey

The use of component-based software for constructing and tailoring embedded systems has promise. However, most third-party components are too heavyweight and don't explicitly address time, memory, reliability, power, cost, and other cross-cutting constraints. A key problem, and the one where most people have spent their energies, is developing the components themselves without fully addressing how they interact with other components or how they fit into a component infrastructure. While significant work has been done in developing components, less has been done with components suitable for embedded systems. Our research in developing component-based OSs for embedded systems can be described in three key areas: developing and implementing domain-specific components and component infrastructures, developing configuration tools to help in the composition of embedded systems, and non-functional analysis to assess, for example, real-time and fault tolerance capabilities.

Project Homepage

Computational Geometry

Where algorithms and geometry meet

Gabriel Robins

We are addressing open practical issues in computational geometry, an area that lies at the interface between the fields of geometry and algorithms. Recent projects include devising new algorithms with improved approximation ratios for the Group Steiner problem in general graphs, as well as the development of a heuristic with the currently best-known approximation bound for the general graph Steiner tree problem. We have also formulated and addressed a new and practical time-dependent variant of the classical Traveling Salesman Problem, which led to a number of follow-up works. Other results include upper and lower bounds on the maximum degree of minimum spanning trees under various metrics, and efficient algorithms for pattern recognition in point sets, which are applicable to landmine detection. Our work is motivated and driven by real-world needs and applications.

Project Homepage

ControlWare

Feedback Performance Control in Software Services

Tarek Abdelzaher and Jack Stankovic

Software systems are becoming larger and more complex. At the same time, they are being deployed in applications where performance guarantees are required. Feedback control theory is a promising technology for performance control in software systems. We are developing solutions using feedback control for a variety of systems and organizing the results in a manner that we hope establishes a theory and practice of feedback control in computer systems. To this end we have built ControlWare. ControlWare is a middleware QoS-control architecture based on control theory. ControlWare allows the user to express QoS specifications, maps these specifications into appropriate feedback control loops, tunes the loop controllers analytically and connects the loops to the right sensors and actuators such that the desired QoS is achieved.
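
The sketch below shows the basic ControlWare pattern in miniature: a QoS specification (a target latency) is enforced by a proportional-integral loop whose actuator is the admitted request rate. The linear "plant" model and the gains are hypothetical stand-ins; ControlWare's contribution is mapping real QoS specs onto such loops, tuning them analytically, and wiring them to real sensors and actuators.

```python
# PI feedback loop that holds a hypothetical server's latency at a QoS target
# by adjusting the admitted request rate.

TARGET = 0.2                 # desired latency (seconds)
KP, KI = 100.0, 20.0         # controller gains (illustrative, not tuned analytically)

def plant_latency(rate):
    """Hypothetical server model: latency grows with admitted load."""
    return 0.05 + 0.002 * rate

rate, integral = 20.0, 0.0   # actuator value and integrator state
for _ in range(50):
    error = plant_latency(rate) - TARGET     # sensor reading vs. QoS spec
    integral += error
    rate -= KP * error + KI * integral       # admit less when latency is high

print(f"admitted rate: {rate:.1f} req/s, latency: {plant_latency(rate):.3f} s")
```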

Project Homepage

Crowdsourcing

Using Crowdsourcing to Enhance Crisis Management

Joe Boyle and Alf Weaver

As disasters continue to occur across the globe, the need to manage and control these situations has come into prominence. In the event that a natural disaster or terrorist attack strikes, it would be valuable for both individuals and officials overseeing the response to have access to the best possible information regarding what has happened, what is happening currently, what could possibly happen in the future, and where these events are occurring. Crowdsourcing, the use of a group of people or community to achieve a desired goal, is a potential method for gathering this information. We present the design and implementation of a crowdsourcing application suite that facilitates the gathering and displaying of information from individual citizens during a crisis. We explore how the system is used and potential improvements for the future.

Project Homepage

DBP

Dynamic Binary Parallelization

Mary Lou Soffa and Kamin Whitehouse

This research is exploring the parallelization of software by analyzing and modifying the binary instruction stream during execution. We will produce a novel technique called dynamic binary parallelization (DBP) which can parallelize software without access to the source code. DBP is based on the insight that many programs tend to frequently repeat long sequences of instructions called hot traces. DBP monitors a program at run time and dynamically identifies the hot traces, parallelizes them, and caches them for later use so that the program can execute in parallel every time a hot trace repeats. Ph.D. student: Jing Yang
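
A toy illustration of the observation DBP builds on: at run time, long sequences of basic blocks repeat often enough that identifying and caching them pays off. The block stream, trace length, and hotness threshold below are made up; real DBP operates on the binary instruction stream and then parallelizes the traces it finds.

```python
from collections import Counter

def hot_traces(block_stream, trace_len=4, threshold=3):
    """Return block sequences of length trace_len seen at least threshold times."""
    counts = Counter(
        tuple(block_stream[i:i + trace_len])
        for i in range(len(block_stream) - trace_len + 1)
    )
    return [trace for trace, count in counts.items() if count >= threshold]

# A loop-heavy execution: blocks B2 -> B3 -> B4 -> B5 repeat many times.
stream = ["B1"] + ["B2", "B3", "B4", "B5"] * 6 + ["B6"]
for trace in hot_traces(stream):
    print(trace)
```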

Project Homepage

DDAM

Data-Driven Appearance Models for Computer Graphics

Jason Lawrence

Over the last decade, the computer graphics community has witnessed a significant increase in the availability of measured (or non-parametric) appearance data. Although directly using these measurements to shade a surface can provide a level of realism that is difficult to achieve with traditional analytic models, fully incorporating non-parametric appearance data into a traditional computer graphics pipeline still presents several research challenges. The goal of this project is to develop new representations for non-parametric appearance data that address some of these open research challenges. We have designed representations that support both interactive rendering and efficient sampling in the context of physically-based rendering. We also investigate more general representations that allow a user to edit the variation in a high-dimensional function while retaining its fidelity to the original measurements.

Project Homepage

Dependability

Reliable, Trusted and Maintainable software

John Knight, David Evans and Jack Davidson

Dependability of a computing system is the ability to deliver service that can justifiably be trusted. Dependability is the system property that integrates such attributes as reliability, availability, safety, security, and maintainability. We focus on research issues related to software and architectures in high-value systems - computing systems of extreme importance to society whose failure would have a severe negative impact whether measured in terms of time, money or loss of life. Our research interests encompass safety-critical systems, such as medical devices, avionics, weapons systems; critical infrastructures such as financial networks, transportation systems, and power systems; and emergent grid computing systems that increasingly play a strategically vital role in such diverse industries as finance, health care, pharmaceuticals, and aerospace. We work closely with industrial and governmental partners to ensure research relevance. Past and current members of our group continue to provide leadership in academia, industry, and the military.

Project Homepage

Diskpower

Energy-Efficient Storage Systems

Sudhanva Gurumurthi

We are at an interesting crossroads in computer architecture. Over the past two decades, there have been several advances in processor design, from microarchitectural techniques to boost instruction- and thread-level parallelism, to the shift from the use of single-core to multicore processors. The number of cores on the die is expected to continue growing in the future, thereby providing a tremendous amount of processing power within a single chip. At the same time, applications are also becoming more data-intensive. There already exist several applications that handle massive amounts of data and are used by millions of people every day. These include traditional data-intensive applications, such as transaction processing, email, and search engines, as well as newer applications spanning areas such as social networking and photo and video sharing. The number of such applications is growing all the time and so is the amount of data they handle. This confluence of trends in architecture and applications requires computer architecture support for storing massive amounts of data, providing efficient access to that data, and transporting it efficiently between storage and the processors. Designing such computer architectures in an energy-efficient manner is a major challenge. The goal of this project is to design energy-efficient storage systems and storage-centric architectures for data-intensive applications, as well as develop techniques for designing and configuring storage systems to boost energy efficiency. A few contributions of this project to date include Dynamic RPM modulation to provide multiple performance/energy operating points for disk drives, Intra-Disk Parallelism to exploit I/O parallelism inside a disk drive, sensitivity-based optimization to craft storage energy management policies, and active storage architectures that allow storage-centric computation to be offloaded to processors closer to the storage devices.

Project Homepage

DRPM

Power and Temperature Optimization of Storage Systems

Sudhanva Gurumurthi

Denser and faster silicon integration has only exacerbated the problem of moving data to and from the storage subsystem. With applications getting larger and becoming more data-centric, the I/O subsystem has become a compelling target of optimization. We investigate how to manage power and temperature issues in storage systems. We proposed the use of a novel disk drive design, called "DRPM", where the disk can rotate at multiple speeds. This allows power to be traded off dynamically against performance at a finer granularity than what is done traditionally, i.e., either complete spindown, where no I/O is possible, or full-speed operation. DRPM is now widely researched by both the computer architecture and operating systems communities. Our studies also indicate that DRPM is likely to become one of the best solutions to effectively tackle temperature issues in disk drives, especially as computers are moving into an era where disk performance will increasingly rely on RPM rather than recording surface density.
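
As a concrete (and entirely hypothetical) illustration of the finer-grained tradeoff, the sketch below picks the lowest RPM level whose service rate can keep up with the recent request arrival rate. The RPM levels, power figures, and service-rate model are invented for the example and are not DRPM's published policy or numbers.

```python
RPM_LEVELS = [
    # (rpm, approximate power in watts, approximate requests/s it can sustain)
    (3600, 4.0, 60),
    (7200, 8.0, 120),
    (12000, 13.0, 200),
]

def choose_rpm(recent_arrival_rate, headroom=1.2):
    """Pick the slowest (lowest-power) RPM with enough service-rate headroom."""
    for rpm, power, service_rate in RPM_LEVELS:
        if service_rate >= headroom * recent_arrival_rate:
            return rpm, power
    return RPM_LEVELS[-1][0], RPM_LEVELS[-1][1]   # saturated: run at full speed

print(choose_rpm(recent_arrival_rate=45))    # light load -> low RPM, low power
print(choose_rpm(recent_arrival_rate=150))   # heavy load -> full speed
```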

Project Homepage

Eos

Aspect-Oriented Extension to C#

Kevin Sullivan

The Eos project is an aspect-oriented extension to C#. Eos improves upon the current aspect-oriented language model in three dimensions. First, it generalizes the aspect instantiation and advice weaving model to eliminate the need for the work-arounds that are unavoidable today when aspects are used to express integration concerns. Second, it generalizes the join point model. Third, it eliminates the distinction between class and aspect constructs in favor of a single conceptual building block that combines the expressive capabilities of current classes and aspects, significantly improving conceptual integrity and uniformity in language design.

Project Homepage

FASTA

Biological Sequence Comparison

Bill Pearson

Rapid concurrent development of molecular cloning techniques, DNA sequencing methods, efficient sequence comparison algorithms, and computer workstations has revolutionized the role of biological sequence comparison in molecular biology. Sequence comparison is now used routinely as the first characterization of a DNA or protein sequence and is an essential part of the human genome project. We developed FASTA, one of the first widely used programs for searching protein and DNA sequence databases. Currently, we are developing and testing sequence comparison methods that will allow us to "push back" the evolutionary horizon - the distance at which related protein sequences can be reliably detected - from the current 800-1,000 million years to more than 2,000 million years. We also use genome-scale sequence comparison to identify potential "young" proteins - proteins that emerged over the last 200-400 million years.
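
For readers unfamiliar with what a sequence comparison score is, the snippet below computes a Smith-Waterman local alignment score with a simple match/mismatch/gap scheme. This is only the textbook dynamic program that heuristic tools approximate; FASTA itself uses much faster word-based heuristics (and substitution matrices rather than the arbitrary scores used here) to search whole databases.

```python
def local_alignment_score(a, b, match=5, mismatch=-4, gap=-7):
    """Best local alignment score between sequences a and b (linear gap penalty)."""
    prev = [0] * (len(b) + 1)
    best = 0
    for ca in a:
        curr = [0]
        for j, cb in enumerate(b, start=1):
            score = max(
                0,
                prev[j - 1] + (match if ca == cb else mismatch),  # align ca with cb
                prev[j] + gap,                                    # gap in b
                curr[j - 1] + gap,                                # gap in a
            )
            curr.append(score)
            best = max(best, score)
        prev = curr
    return best

print(local_alignment_score("HEAGAWGHEE", "PAWHEAE"))
```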

Project Homepage

FATS

Wireless Privacy Attacks

Kamin Whitehouse and Jack Stankovic

We demonstrated that we can eavesdrop on wireless devices in a home and extract private information, even when all of the wireless data is encrypted, with a Fingerprint and Timing-based Snooping attack, or a FATS attack. Experiments on four houses demonstrate that we can infer when and how often the bathroom and kitchen are visited, when the person is sleeping, and when the home is occupied with 90-100% accuracy.

Project Homepage

Feedback Control

Real-Time Scheduling

Jack Stankovic, Sang Son, Gang Tao and Tarek Abdelzaher

Despite the significant body of results in real-time scheduling, many real world problems are not easily supported. While algorithms such as Earliest Deadline First, Rate Monotonic, and the Spring scheduling algorithm can support sophisticated task set characteristics (such as deadlines, precedence constraints, shared resources, jitter, etc.), they are all open loop scheduling algorithms. Open loop refers to the fact that once schedules are created they are not adjusted based on continuous feedback. While open-loop scheduling algorithms can perform well in static or dynamic systems in which the workloads can be accurately modeled, they can perform poorly in unpredictable dynamic systems. In this project we are developing a theory and practice of feedback control real-time scheduling. Feedback control real-time scheduling defines error terms for schedules, monitors the error, and continuously adjusts the schedules to maintain stable performance. We have developed a practical feedback control real-time scheduling algorithm, FC-EDF, which is a starting point in the long-term endeavor of creating a theory and practice of feedback control scheduling. We are applying our results to applications such as web servers, agile manufacturing, and defense systems.
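
The sketch below conveys the feedback control scheduling idea in its simplest form: monitor the deadline miss ratio, compare it with a set point, and adjust the admitted utilization accordingly. The workload model, gain, and pure proportional controller are illustrative assumptions, not the published FC-EDF design.

```python
import random

SET_POINT = 0.05          # tolerated deadline miss ratio
KP = 0.5                  # proportional gain (illustrative)

def observed_miss_ratio(admitted_util):
    """Hypothetical plant: misses appear once the actual utilization
    (admitted load times an uncertain execution-time factor) exceeds 1."""
    actual_util = admitted_util * random.uniform(1.1, 1.3)
    return max(0.0, min(1.0, actual_util - 1.0))

admitted_util = 1.0
for _ in range(40):
    error = observed_miss_ratio(admitted_util) - SET_POINT   # monitor the error term
    admitted_util -= KP * error                              # adjust the schedule
    admitted_util = max(0.1, min(admitted_util, 1.0))

print(f"admitted utilization settles near {admitted_util:.2f}")
```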

Project Homepage

Flash Flooding

Rapid Flooding in Mesh Networks

Kamin Whitehouse

The Flash flooding protocol exploits the capture effect to reduce flooding latency by allowing nodes to propagate the flood simultaneously, thereby eliminating neighborhood contention. Our results indicate that Flash can reduce latency by as much as 80%, achieving flooding latencies near the theoretical lower bound without sacrificing coverage, reliability or power consumption.

Project Homepage

FPCA

Field Programmable Core Array

Kevin Skadron, Ben Calhoun, and John Lach

This project is developing novel ways to treat simple cores as building blocks, allowing a multicore processor to reconfigure by combining cores in various ways to provide SIMD, federated, and other organizations.

Project Homepage

Galileo

An Advanced Fault Tree Analysis Tool

Kevin Sullivan

The Galileo Fault Tree Analysis Tool is an experimental system being built within our Package-Oriented Programming (POP) project. Galileo supports advanced fault tree analysis capabilities developed under the direction of Professor Joanne Bechta Dugan. Specifically, Galileo supports Dugan's DIFtree analysis method. Package-Oriented Programming is a research project investigating the reuse and integration of very large-scale components. By very large, we mean "on the order of a million lines of code equivalent." We are exploring the use of shrink-wrapped software applications as massive building blocks, because it appears that new package architectures are sufficiently promising (but also lacking in certain essential ways) to warrant considerable research attention. Key issues include the design of mechanisms supporting restriction, specialization, extension and integration of packages. Our research addresses several issues, including (1) architectural styles for POP systems; (2) generalizing the architectural approaches that appear to make POP a reasonably successful approach to large-scale reuse and integration; and (3) investigating the software development life-cycle for systems developed in the POP style.

Project Homepage

Genesis

A Framework for Achieving Component Diversity

John Knight, David Evans, Jack Davidson and Anh Nguyen-Tuong

We seek to reproduce the genetic diversity found in nature by deliberately and systematically introducing diversity in software components. The hope is that while the phenotypes of software components will be similar (their functional behavior), their genotypes will contain enough variation to protect the population against a broad class of diseases (attacks, aging). As our engine of software diversity, we will use a systematic and comprehensive methodology based on two fundamental and complementary approaches: design diversity and data diversity. Design diversity is the creation of multiple implementations of a given specification such that the different implementations have different designs. Data diversity is the use of multiple copies of a single implementation with each copy operating on different input data but yielding the same desired results. In data diversity, the different data streams are produced by a process known as data re-expression. Each diversity approach will be applied systematically at multiple levels of software representation to produce a spectrum of techniques for the creation of diverse software components.

Project Homepage

Hood

A Distributed Programming Abstraction

Kamin Whitehouse

Traditional distributed programming abstractions such as shared memory are difficult to apply to sensor networks because of the unreliable, low-bandwidth, geographically-constrained communication model. Instead, many distributed sensor network algorithms are based on the concept of a neighborhood, in which each node selects a set of important neighbors and maintains state about them. However, programmers must still implement neighborhoods in terms of their component parts: messaging, data caching, and membership. We developed an abstraction called Hood that unifies these fundamental concepts as a distributed programming primitive for sensor networks. Using Hood, a neighborhood can be defined by a set of criteria for choosing neighbors and a set of variables to be shared, such as a one-hop neighborhood over which light readings are shared, and a two-hop neighborhood over which both locations and temperature are shared. Once the neighborhoods are defined, Hood provides an interface to read the names and shared values of each neighbor. Beneath this interface, Hood hides the complexity of discovery and data sharing. This abstraction captures the common usage of neighborhoods in many publicly available sensor network applications, and was used to implement some of TinyOS's largest applications.
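
To convey the shape of the abstraction (not its real API, which is a nesC/TinyOS component), here is a Python sketch of a Hood-like neighborhood: a membership filter, a set of shared attributes, a cache of neighbor state filled from overheard broadcasts, and a read interface over it. All names and the broadcast mechanism are illustrative assumptions.

```python
class Hood:
    """Toy neighborhood: filters overheard nodes and caches their shared values."""

    def __init__(self, shared_attrs, membership_filter, max_neighbors=8):
        self.shared_attrs = shared_attrs            # e.g. ["light", "location"]
        self.membership_filter = membership_filter
        self.max_neighbors = max_neighbors
        self.cache = {}                             # node id -> {attr: value}

    def on_broadcast(self, node_id, values):
        """Called whenever a neighbor's shared values are overheard."""
        if self.membership_filter(node_id, values) and (
            node_id in self.cache or len(self.cache) < self.max_neighbors
        ):
            self.cache.setdefault(node_id, {}).update(
                {k: v for k, v in values.items() if k in self.shared_attrs}
            )

    def neighbors(self):
        return list(self.cache)

    def get(self, node_id, attr):
        return self.cache[node_id][attr]

# A "bright neighbors" hood sharing light readings from nodes over a threshold.
light_hood = Hood(["light"], lambda nid, v: v.get("light", 0) > 200)
light_hood.on_broadcast(7, {"light": 512, "temp": 21})
light_hood.on_broadcast(9, {"light": 50})
print(light_hood.neighbors(), light_hood.get(7, "light"))
```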

Project Homepage

HotLeakage

A Software Model of Leakage

Kevin Skadron and Mircea Stan

HotLeakage is a software model of leakage based on predictive technology data. It is publicly available on the web, computationally very simple, can easily be integrated into popular power-performance simulators like Wattch, can easily be extended to accommodate other technology models, and can easily be used to model leakage in a variety of structures (including caches, among others). It extends the Butts-Sohi model and corrects several important sources of inaccuracy. Our model is called HotLeakage because it includes the exponential effects of temperature on leakage. Temperature effects are important, because leakage current depends exponentially on temperature, and future operating temperatures may exceed 100 degrees C. HotLeakage also includes the heretofore unmodeled effects of supply voltage, gate leakage, and parameter variations. HotLeakage has circuit-level accuracy because the parameters are derived from transistor-level simulation (Cadence tools). Yet simplicity is maintained by deriving the necessary circuit-level model for individual cells, such as memory cells or decoder circuits, and then taking advantage of the regularity of major structures to develop abstract models that can be expressed in simple formulas similar to the Butts-Sohi model.
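
The exponential temperature dependence at the heart of the model is just the textbook subthreshold-leakage form, sketched below. The threshold voltage, subthreshold slope factor, and scale constant here are placeholder values, not the calibrated, technology-specific parameters that HotLeakage derives from circuit-level simulation.

```python
import math

K_OVER_Q = 8.617e-5        # Boltzmann constant over electron charge (V/K)

def subthreshold_leakage(temp_kelvin, vth=0.30, n=1.5, i0=1.0):
    """Relative subthreshold leakage of one device:
    I ~ i0 * vT^2 * exp(-Vth / (n * vT)), where vT = kT/q."""
    vt = K_OVER_Q * temp_kelvin
    return i0 * vt * vt * math.exp(-vth / (n * vt))

for t in (300, 330, 360, 390):       # roughly 27 C to 117 C
    rel = subthreshold_leakage(t) / subthreshold_leakage(300)
    print(f"{t} K: {rel:.1f}x the leakage at 300 K")
```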

Project Homepage

HotSpot

An Accurate and Fast Architectural Thermal Model

Kevin Skadron and Mircea Stan

With power density and hence cooling costs rising exponentially, temperature-aware design has become a necessity. Processor packaging is becoming a major expense, and for many chips can no longer be designed for the worst case. Furthermore, simple estimates of power dissipation are not a good proxy for direct measurement or simulation of temperature. HotSpot is an accurate and fast thermal model suitable for use in architectural studies. It is based on an equivalent circuit of thermal resistances and capacitances that correspond to microarchitecture blocks and essential aspects of the thermal package. The model has been validated using finite element simulation and data from actual chips. HotSpot has a simple set of interfaces and hence can be integrated with most power-performance simulators. The chief advantage of HotSpot is that it is compatible with the kinds of power/performance models used in the computer-architecture community, requiring no detailed design or synthesis description. HotSpot makes it possible to study thermal evolution over long periods of real, full-length applications.
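
A single-block version of the equivalent-circuit idea is sketched below: one thermal resistance to ambient, one thermal capacitance, driven by the block's power. HotSpot builds a full RC network across the microarchitecture blocks and the package; the R, C, power, and time-step values here are illustrative only.

```python
R = 0.8        # K/W, block-to-ambient thermal resistance (illustrative)
C = 0.05       # J/K, thermal capacitance (illustrative)
T_AMB = 45.0   # ambient/package temperature (C)
DT = 0.001     # simulation time step (s)

temp = T_AMB
trace = []
for step in range(2000):                              # 2 seconds of simulated time
    power = 30.0 if (step // 500) % 2 == 0 else 5.0   # alternating workload phases
    # Explicit Euler step of  C * dT/dt = P - (T - T_amb) / R
    temp += DT * (power - (temp - T_AMB) / R) / C
    trace.append(temp)

print(f"peak temperature: {max(trace):.1f} C, final: {trace[-1]:.1f} C")
```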

Project Homepage

Info Tech

Information Technology for Mobile and Web Based Systems

Sang Son

As computer technology becomes more integrated into society, information management for human activities necessitates computing that responds to requests in real-time. Many information systems are now used to monitor and control appliances as well as large complex systems which must have predictable and timely behaviors. Millions of users will soon be carrying mobile computers that access the Internet wirelessly. To support those users in applications such as web-based information systems, e-commerce, and Internet appliances, providing consistent data and transaction services in real-time with acceptable quality and security will be a key requirement. This project aims to design and evaluate models and methodologies for developing robust and responsive information systems for those new applications. Current research issues include timeliness and predictability, adaptive QoS management, flexible security paradigms, data consistency in mobile environments, and applying real-time and information technology in web-based systems.

Project Homepage

IPA

Inexpensive Program Analysis

David Evans

Despite considerable progress in program analysis tools, typical software development processes make little use of advanced analysis techniques, even when they are producing mission critical software. Our group is exploring analysis approaches that combine dynamic and static analysis techniques. We co-organized the Workshop on Dynamic Analysis (25 May 2004 at ICSE). Software we produced includes Splint, a tool for statically checking C programs for security vulnerabilities and coding mistakes, and Perracotta, a dynamic analysis tool for automatically inferring temporal properties of a target program. With minimal effort, Splint can be used as a better lint. If additional effort is invested adding annotations to programs, Splint can perform stronger checking than can be done by any standard lint. Perracotta takes the program's execution traces as input and outputs a set of likely temporal properties. The properties can enhance program understanding, be used to aid program evolution, or be used as input properties for a model checker. We have investigated the mining of temporal API rules from imperfect traces, the automatic inference of temporal program properties, differential program analysis, statically detecting likely buffer overflow vulnerabilities and dynamic memory errors, and program evolution.

Project Homepage

Isoluminance

Isoluminant Color Selection for Non-Photorealistic Rendering

Jason Lawrence

We developed a color selection algorithm for non-photorealistic rendering applications. The human visual system processes luminance information in a scene separately from its chrominance. By using colors that have similar luminance but differ in hue, skilled artists have exploited this property to create perceptual tension in their work. We presented a method for automatically selecting colors that exhibit this isoluminance property. We apply this color selection algorithm to both existing and novel non-photorealistic rendering image filters. We introduced a modified pointillist and image-mosaic filter along with a new image filter inspired by the work of the modern artist Chuck Close.
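
The first step of any such scheme is measuring luminance and grouping colors whose luminance is close, as in the sketch below. It uses the standard Rec. 709 luma weights and a simple tolerance band; the perceptual refinements of the actual algorithm are not reproduced here, and the candidate palette is made up.

```python
def luminance(rgb):
    """Approximate relative luminance of an 8-bit RGB color (Rec. 709 weights)."""
    r, g, b = (c / 255.0 for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def isoluminant_palette(candidates, target_luminance, tol=0.05):
    """Keep colors whose luminance falls within tol of the target."""
    return [c for c in candidates if abs(luminance(c) - target_luminance) < tol]

candidates = [(200, 60, 60), (60, 160, 60), (80, 80, 220), (240, 240, 80)]
# Returns the red-ish and blue-ish colors: different hues, similar luminance.
print(isoluminant_palette(candidates, target_luminance=0.35))
```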

Project Homepage

Isotach

Concurrency control without locks or barriers

Paul Reynolds

Isotach systems are distributed or parallel computer systems with special support for interprocess coordination. Isotach systems provide low-cost ordered message delivery for applications such as distributed databases, cache coherence in multiprocessors, distributed shared memory, parallel production systems, and distributed control systems. Isotach systems also provide support for fault detection and checkpointing. Isotach systems provide these benefits by maintaining a logical time system that enables processes to predict and control the logical time at which their messages are received. This control is a powerful mechanism for interprocess coordination in a parallel or distributed computation.

Project Homepage

Jazz

Demand-Driven Structural Testing with Dynamic Instrumentation

Mary Lou Soffa

Producing reliable and robust software has become one of the most important software development concerns in recent years. Testing is a process by which software quality can be assured through the collection of information. While testing can improve software reliability, current tools typically are inflexible and have high overheads, making it challenging to test large software projects. Jazz is a new scalable and flexible framework for testing programs with a novel demand-driven approach based on execution paths to implement test coverage. This technique uses dynamic instrumentation on the binary code that can be inserted and removed on-the-fly to keep performance and memory overheads low. The Jazz software testing tool can do def-use, branch, and node coverage tests in the Jikes Java research virtual machine and just-in-time compiler. Jazz is a new and improved implementation of the SoftTest framework.

Project Homepage

LAVA

Laboratory for Computer Architecture at Virginia

Kevin Skadron

Architectural innovation is vital to continued improvement in processor performance. However, limits in efficient ILP extraction and constraints on power delivery, cooling, and energy efficiency have all led to multicore and manycore architectures, as well as growing use of accelerators such as GPUs. The integration of these issues poses a fantastically complex problem space, encompassing the types of cores to be used, the number of each type, their interconnect and cache hierarchy, as well as programming models for managing these diverse and possibly heterogeneous cores. The LAVA group is exploring all these issues. Because simulation is so important to our work, the LAVA group is also exploring new ways to make simulations fast and accurate. We also have the opportunity to build test chips or prototype architectural ideas using FPGAs. Much of the LAVA Lab's work is in conjunction with the computer engineering faculty.

Project Homepage

Legion

World-Wide Virtual Computer

Andrew Grimshaw

Legion is an object-based, meta-systems software project at the University of Virginia. From the project's beginning in late 1993, the goal of the Legion Group has been to create a highly usable, efficient, and scalable system founded on solid principles. We have been guided by our own work in object-oriented parallel processing, distributed computing, and security, as well as by decades of research in distributed computing systems. Our system addresses key issues such as scalability, programming ease, fault tolerance, security, site autonomy, etc. Legion is designed to support large degrees of parallelism in application code and manage the complexities of the physical system for the user. The first public release was made at Supercomputing '97, San Jose, California, on November 17, 1997. Legion is an "open" system that has been designed to allow and encourage third party development of applications, run-time library implementations, and "core" system components.

Project Homepage

MacroLab

Vector-based macroprogramming

Kamin Whitehouse and Westley Weimer

MacroLab allows a user to write a single macroprogram for an entire Cyber-Physical System. It is the first system that can perform automatic, topology-specific decomposition of programs describing parallel operations on parallel data structures.

Project Homepage

Marionette

Embedded Wireless Debugging

Kamin Whitehouse

A main challenge in developing applications for wireless embedded systems is the lack of run-time visibility and control. We developed a tool suite called Marionette which provides an interpreter through which the network operator can remotely call the node's functions, read and write its variables, and access its enumerations and data structures. This interactive and scriptable environment allows the developer to seamlessly trade off between writing application logic on the PC for simplicity and debugging or writing it on the nodes for increased efficiency. This greatly eases the development process, and was shown to reduce code size in existing applications by up to 75%. Marionette is actively being used for development on several testbeds of up to several hundred nodes. Marionette builds upon earlier work on the Matlab/TinyOS development environment, which has been in the main TinyOS distribution for years.

Project Homepage

Marple

A Demand-Driven Path-Sensitive Framework to Detect, Diagnose and Test for Software Vulnerabilities

Mary Lou Soffa

This project is developing a practical, path-based framework to statically detect and then diagnose software faults. The techniques are path-based in that both detecting and reporting faults use path information. An important contribution of the work is the development of a demand-driven analysis that effectively addresses scalability challenges faced by traditional path-sensitive analyses. A prototype tool, Marple, is being developed to experimentally evaluate the research.

Project Homepage

MetroNet

Social Computing in Urban Communities

Kamin Whitehouse

The MetroNet project will consist of sensors deployed in the storefront windows of downtown Charlottesville. The sensors can be used by the store owners, city customers, city planners, real estate customers, etc.

Project Homepage

Mitigating Contention

Compiling to Mitigate Contention for QoS

Mary Lou Soffa

To fully utilize current multicore processors, colocation of multiple applications on the same platform is necessary. However, contention for shared memory resources among colocated applications on a multicore can cause a great amount of performance degradation. The performance degradation poses a significant challenge for achieving an application's quality of service (QoS) goal on multicore platforms. This work proposes a novel compilation approach that mitigates resource contention to address the QoS challenges on modern multicore architectures. In essence, the proposed approach specializes and adapts code layouts to throttle an application's memory accesses to enforce the QoS priorities. Ph.D. student: Lingjia Tang

Project Homepage

Mutation Routing

Mobile-to-Mobile Routing in Distributed Applications

Kamin Whitehouse

One of the biggest challenges in a distributed tracking application such as the Pursuer-Evader Game (PEG) is mobile-mobile routing: routing from a source node to a destination while both are changing location in the network topology. Mutation Routing is an algorithm that finds a route once and then mutates it when nodes move. The algorithm can be proven never to mutate to an inconsistent state and, because mutation eliminates the need for control message and new route discovery overhead, it increases the overall bandwidth on the route. One disadvantage to this algorithm, however, is that routes can mutate to sub-optimal, snake-like patterns. Mutation routing therefore trades energy-efficiency and latency for higher bandwidth and lightweight route maintenance. Our work investigates hybrid routing algorithms that can choose the ideal operating point in this tradeoff.

Project Homepage

MV5

A Simulator for Asymmetric, SIMD, Multithreaded, and Multicore Architectures

Kevin Skadron

MV5 builds on Michigan's M5 simulator and adds support for asymmetric cores on a point-to-point interconnect with various topologies, directory-based cache coherence, SIMD cores, true multithreaded support, and novel thread scheduling. MV5 is currently in beta release with limited support, and is available under a BSD-like license.

Project Homepage

N-Variant Systems Framework

Software System Protection Framework

John Knight, David Evans, Jack Davidson, Anh Nguyen-Tuong and Jonathan Rowanhill

We propose a new approach to software service protection that is based on software system structures which combine monitoring software and tailored program diversity. The resulting systems have two valuable properties that cannot be achieved with previous vulnerability-masking approaches: (1) their effectiveness does not depend on keeping secrets from adversaries; and (2) we can construct proofs that a system cannot be compromised by a class of attacks, no matter what vulnerabilities exist in the server program. The first property means that adversaries can have complete knowledge about the structure and software of our systems without compromising their security. Thus, insider snooping cannot defeat our vulnerability protection against outsider-initiated attacks, and probing or guessing attacks that have been shown effective against previously proposed diversity techniques pose no threat to our system. The second property means that there can be a high level of assurance in the system's coverage of vulnerabilities, based on formal arguments that depend only on clearly stated assumptions about components of our system structure and place no constraints on properties of the protected software service. An instantiation of our idea is the N-Variant System Framework, which provides a general mechanism for detecting and preventing classes of attacks on vulnerable servers. Our approach provides provable security that does not rely on secrets.
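
The toy monitor below conveys the framework's basic structure under heavy simplification: the same input is fed to two artificially diversified variants, and any divergence in their results is treated as evidence of compromise and causes the request to be dropped. Real N-variant systems run diversified server processes (for example, with disjoint address spaces) under a kernel-level monitor; the "variants" here are contrived stand-ins.

```python
def variant_a(data: bytes) -> int:
    return sum(data) % 251

def variant_b(data: bytes) -> int:
    # Same specification as variant_a, but a deliberately different internal
    # structure; a fault or exploit that depends on one variant's internals
    # would perturb only that variant.
    total = 0
    for byte in reversed(data):
        total = (total + byte) % 251
    return total

def monitored_call(data: bytes) -> int:
    a, b = variant_a(data), variant_b(data)
    if a != b:
        raise RuntimeError("variants diverged: possible attack, dropping request")
    return a

print(monitored_call(b"GET /index.html"))   # benign input: both variants agree
```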

Project Homepage

Nancy's Pantry

Bringing Printed Materials to the Visually Impaired

Paul Reynolds

Consider what it must be like to order a meal if you can't read the menu. Or to learn the amount of your latest utility bill. Or to select a can of your favorite soup off the shelf in your pantry. If you're visually impaired, there's no affordable technology in the marketplace to support you. However, the enabling technology does exist! Project Nancy's Pantry is about bringing low cost access to common, everyday printed materials: menus, soup cans, utility bills, bus schedules, clothing -- the list really is nearly endless. A key requirement is low-cost production and low-cost use. We're exploring a two-part technology. Our production technology of choice is a 2D (two dimensional) barcode. Currently we're using the PDF417 barcode and the web technology XML. Using a laser or optical scanner similar to the checkout at most retailers, we can scan the barcodes and convert them to a computer-processable form. Our user technology of choice is a low cost flexible processor, such as a PDA. Any device capable of general purpose processing, and generation of audio, is sufficient. Ultimately we envision a single PDA-like device that incorporates either laser or optical scanning, and general purpose processing. Such devices exist in a limited part of the commercial sector. We aim to bring them into the consumer sector.

Project Homepage

NEST

Networked Embedded Systems Technology for Wireless Sensor Networks

Jack Stankovic and Sang Son

Wireless sensor networks have emerged as a new information gathering paradigm based on the decentralized collaboration of a large number of sensing/computation/actuation nodes. This project develops middleware and programming paradigms for wireless sensor networks. We have key innovations in real-time routing, packet aggregation, localization, asynchronous multicast, power management, group management and programming paradigms. Work is continuing in all these areas as well as on new topics including wireless sensor network event services, swarm computing, extreme scaling and robustness. Our work utilizes both simulation and a physical testbed of 120 Berkeley motes. This project is funded by DARPA, a MURI and NSF.

Project Homepage

NVM

Non-Volatile Memory Hierarchies

Sudhanva Gurumurthi

SRAM, DRAM and rotating magnetic disks have served as the bedrock technologies for designing processor caches, main memory, and storage for many years. However, continued use of these technologies poses several challenges with regard to performance, power, density, and scalability. One way to address these challenges is to use Non-Volatile Memory in lieu of these traditional memory and storage media. Examples of Non-Volatile Memory include Flash, Spin Transfer Torque RAM (STT-RAM), and Phase Change Memory (PCM). While these Non-Volatile Memories offer several benefits such as high density and very low leakage power, they also pose several challenges such as the need for high write currents, slow access times (compared to SRAM and DRAM), and limited endurance. This project aims to develop a high performance and extremely low-power memory hierarchy for a server, from processor caches down to storage, using Non-Volatile Memory. Additionally, this project also explores how the non-volatility property of these memories can be exploited in interesting ways at the architecture and system levels to enhance performance, energy-efficiency, and dependability. A few contributions of this project to date include relaxed volatility STT-RAM cache designs, Flash endurance enhancement techniques, and the development of architecture design tools for STT-RAM and Flash.

Project Homepage

Perracotta

Program Dynamic Temporal Analysis

David Evans

Perracotta is a dynamic analysis tool for automatically inferring temporal properties of a target program. It takes the program's execution traces as input and outputs a set of temporal properties it likely has. We address several key challenges to make our dynamic analysis effective. First, we propose a hierarchy of templates to capture commonly occurring properties. Perracotta uses a scalable statistical inference algorithm that can handle imperfect traces, several heuristics for selecting interesting temporal properties, and a chaining method for constructing larger finite state machines out of a group of small ones. To evaluate whether such property patterns are useful, we apply Perracotta to help program evolution by inferring temporal properties for several versions of a program, then compare the inferred properties across versions. We also use Perracotta to support program verification. We applied Perracotta to infer temporal rules for the Microsoft Windows kernel APIs, and found numerous previously undetected bugs in Windows.
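
As a flavor of the template-based inference, the sketch below checks one template, Alternation ((PS)*), for every ordered pair of events and keeps pairs whose satisfaction ratio across traces clears a threshold, which is how imperfect traces can be tolerated. Perracotta's actual template hierarchy, statistics, and chaining are richer; the traces and threshold here are invented.

```python
import re
from itertools import permutations

def satisfies_alternation(trace, p, s):
    """True if the trace, projected onto events p and s, matches (PS)*."""
    projected = "".join("P" if e == p else "S" for e in trace if e in (p, s))
    return re.fullmatch(r"(PS)*", projected) is not None

def infer_alternations(traces, threshold=0.6):
    events = {e for t in traces for e in t}
    inferred = []
    for p, s in permutations(sorted(events), 2):
        ratio = sum(satisfies_alternation(t, p, s) for t in traces) / len(traces)
        if ratio >= threshold:
            inferred.append((p, s, round(ratio, 2)))
    return inferred

traces = [
    ["lock", "write", "unlock", "lock", "read", "unlock"],
    ["lock", "read", "unlock"],
    ["lock", "write", "unlock", "unlock"],   # an imperfect trace
]
print(infer_alternations(traces))            # [('lock', 'unlock', 0.67)]
```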

Project Homepage

Perspective Warp

Improved Two-Pass Resampling Algorithms

Jason Lawrence

From classical texture mapping to photo-mosaics, warping images according to a perspective projection is a key operation in many computer graphics algorithms. Working with Microsoft's Graphics Device Interface (GDI+) group, we implemented Catmull and Smith's two-pass resampling algorithm that computes perspective projections of images in scanline order. Although they present an elegant technique for decomposing a general perspective transformation into two simplified 1d reconstruction passes, their approach is limited by how the algorithm selects the order of these passes. We better quantified the types of errors present in this decomposition, and demonstrated that in many important cases, using our new selection criteria improves the output image quality.

Project Homepage

Physicrypt

Physical Cryptography and Security

David Evans

Computing is moving into the real world. Instead of performing computations in protected beige boxes with narrow interfaces to the real world, the majority of future computing will be done by devices (such as nodes in sensor networks) that interact with the real world in rich and dynamic ways. Our research explores how this trend impacts security. We consider ways to use properties of the physical world in which computing devices are deployed to improve security. A related project, Swarm Computing, considers how the new computing environment impacts programming, and what biology can teach us about security. We have explored topics such as .NET security and the lessons learned and missed from Java, mobile sensor networks, using directional antennas to prevent wormhole attacks, the perception and reality of election security, authentication for remote voting, secure aggregation for wireless networks, and security issues and requirements for Internet-scale publish-subscribe systems.

Project Homepage

PRMES

Profit and Resource Management for Embedded Systems

Mary Lou Soffa

Continuous Compilation has a number of benefits and uses for embedded software. It is particularly well suited for enabling software malleability, meeting multiple design constraints that change at run-time, and managing system resources. In this sub-project, we are developing novel solutions to these problems and applying our work with Continuous Compilation to embedded software. We are developing novel analytic models of machine resources, code optimizations, and program code that can be used to better meet performance/cost goals for embedded systems. The models can be used to quickly explore different optimization decisions (e.g., optimization configuration, interactions, etc.) to select the optimizations that best meet an objective function. Unlike other work, such as iterative compilation and experimentally based searches, our approach does not require execution feedback to evaluate whether an optimization is beneficial. Hence, the approach is extremely fast and can determine the benefit of an optimization much quicker than existing approaches. Dynamic code translation can be used to effectively manage embedded system resources. We have developed techniques for instruction code compression/decompression in a purely software based framework. Such an approach is more flexible than a hardware based solution because it allows the compressor and decompressor to be changed. Unlike other software approaches, our technique decompresses only instructions that are very likely to execute, which helps to reduce the run-time overhead of decompression. We have developed a technique, called partial image compression, that substantially reduces code image size using only a modest run-time decompression overhead.

Project Homepage

Processor Reliability

Microprocessor Reliability

Sudhanva Gurumurthi

Technology scaling, which has paved the way for multicore processors, also gives rise to a variety of silicon reliability problems. These problems include particle-induced soft errors and lifetime reliability phenomena, such as Negative Bias Temperature Instability (NBTI) and process variations. These phenomena threaten to break the abstraction that architecture has traditionally provided to the higher layers of the system as a reliable computing substrate. Existing techniques that combat these reliability problems entail significant performance and power overheads. It is imperative to reduce these overheads to effectively harness the computing power of future multicore processors. However, reducing these overheads without seriously compromising the required level of fault protection is challenging. The goal of this project is to develop fault tolerance techniques that provide protection against various silicon reliability phenomena while imposing significantly less performance and power overheads than traditional reliability techniques. A few contributions of this project include Partial Redundant Multi-Threading mechanisms, runtime AVF prediction, Recovery Boosting to provide deep rejuvenation of SRAM cells from NBTI stresses, and combined circuit and microarchitectural techniques to mitigate NBTI on processor functional units.

Project Homepage

Relational Keyword Search

Evaluating Relational Keyword Search Systems

Joel Coffman and Alf Weaver

Our work challenges the ad hoc evaluations reported in the literature, evaluations that lag far behind the best practices developed in information retrieval (IR) for evaluating retrieval systems. Standardized evaluation will enable the same rapid progress in this field that the IR community experienced following the inception of the Text REtrieval Conference (TREC). Our work requires the construction of real-world workloads to benchmark the efficiency and effectiveness of existing systems. The evaluation framework will then guide the development of novel techniques for ranking results and query processing.

Project Homepage

Rodinia

A Benchmark Suite for Heterogeneous Computing

Kevin Skadron

With the microprocessor industry's shift to multicore architectures, research in parallel computing is essential to ensure future progress in mainstream computer systems, including systems with accelerators such as GPUs. This in turn requires standard benchmark programs that researchers can use to compare platforms, identify performance bottlenecks, and evaluate potential solutions. Several current benchmark suites provide parallel programs, but only for conventional, general-purpose CPU architectures; they neither support accelerators' APIs nor represent the kinds of applications and parallelism that are likely to drive the development of such accelerators. Understanding their architectural strengths and weaknesses is important for computer systems researchers as well as for programmers, who will gain insight into the most effective data structures and algorithms for each platform. Hardware and compiler innovation for accelerators and for heterogeneous system design may be just as commercially and socially beneficial as for conventional CPUs. Inhibiting such innovation, however, is the lack of a benchmark suite providing a diverse set of applications for heterogeneous systems. The Rodinia benchmark suite, a set of applications with different characteristics, is designed to address these concerns. The suite is structured to span a range of parallelism and data-sharing characteristics. Each application or kernel is carefully chosen to represent different types of behavior. Rodinia so far targets GPUs and multicore CPUs as a starting point. A few FPGA implementations are also available.

Project Homepage

Secure Mobile Computing

Secure Mobile Computing

Alf Weaver, Ben Calhoun and Travis Blalock

With ordinary mobile devices, a user authenticates in some standard fashion (e.g., password, fingerprint) and gains access to the device's programs and data. If the data is sensitive (e.g., medical data, intelligence data), there is a risk to data privacy and security if the user is incapacitated, or if the user loses the device, or if the device is stolen. Our project improves security by monitoring a biometric signal (initially a heartbeat). Usage continues normally as long as the biometric signal is reliably received. Data protection policies (set by system administrators) dictate what happens in abnormal situations. The device can be forced into a locked state in which user data is encrypted and the data cannot be decrypted until the user has completed a successful re-authentication. Alternatively, the device can be forced into a safe state in which all sensitive data is erased.

Project Homepage

Semantic Streams

A Framework for Composable Semantic Interpretation of Sensor Data

Kamin Whitehouse

One of the largest remaining obstacles to the widespread use of sensor networks is the interpretation of the sensor data; users must be able to query for high-level facts about the world, not just raw ADC readings. Our Semantic Streams framework allows users to pose declarative queries for semantic values, such as "the speeds of vehicles near the entrance of the parking garage." The system combines logical inference with AI planning techniques to compose a sequence of inference units, which are stream operators that perform a minimal amount of sensor fusion or interpretation on incoming event streams and incorporate newly inferred semantic values into outgoing event streams. Both the sensor network and the inference units are logically described in Prolog and the system can reason about which sensors to use and whether the inference units are being used in an appropriate world context. Typically, multiple combinations of sensors and inference units can answer the same query, so users can also declare QoS constraints in CLP(R) notation to choose between logically equivalent inference graphs, e.g., "the confidence of the speed estimates should be greater than 90%" or "I want to minimize the total number of radio messages".

Project Homepage

Software Quality

Reliability and Error-Prevention

Westley Weimer

Our research interests lie in advancing software quality by using both static and dynamic programming language approaches. We are particularly concerned with automatic or minimally-guided techniques that can scale and be applied easily to large, existing programs. We believe that finding bugs is insufficient, and we also work to help programmers address defects and understand error reports. We are also interested in designing languages and language features to help prevent errors. We use concepts from other areas of computer science to help address software quality problems, and have successfully combined elements of databases, systems, and machine learning. For example, we add programming language support for more robust error handling via first-class compensation stacks, and use specification mining techniques to automatically infer important resources and safety policies in the presence of run-time errors.

Project Homepage

STILT

System for Terrorism Intervention and Large-scale Teamwork

John Knight, David Evans and Jack Davidson

In the event of an attack or disaster, the effectiveness of communication has a great impact on the effectiveness of the response. Information, such as which areas have been affected and the level of damage, must be quickly distributed to the people and systems that will use it to formulate an effective response. All individuals who will enact this response must then be notified quickly that the event has occurred and directed as to the action (or inaction) they must take. Feedback is then collected from the responders to assess how the response is being implemented. This type of control loop is currently difficult to enact even within a single jurisdiction, and nearly infeasible at the regional or national scale. The intent of our System for Terrorism Intervention and Large-scale Teamwork (STILT) is to provide a mechanism for enacting such a control loop. The system focuses primarily on detecting and responding to geographically distributed, coordinated terrorist attacks, but it is general enough to apply to any emergency-response situation. Our central premise is that a system that can correlate distributed attacks and precisely target response commands and information to relevant parties will help prevent subsequent attacks, decrease the impact of successful subsequent attacks, and increase response effectiveness.

Project Homepage

Surface Deformations

A Painting Interface for Geometric Modeling

Jason Lawrence

Designing effective user interfaces for modeling 3D geometry has been a longstanding challenge in computer graphics. The task is complicated by the fact that we often attempt to specify 3D positions and directions with purely 2D input devices. To address this challenge, we developed a modeling interface that combines interactive manipulation and physical simulation. Instead of modifying the surface directly, the user specifies physical properties of the surface (e.g., velocity) using a 3D painting interface, and then interactively simulates the resulting surface motion until the desired deformation is achieved. We created several models using this interface, and the work was honored with a best paper award.
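
A rough numerical sketch of the underlying idea appears below: vertices carry a painted velocity and the surface is advanced by simple explicit integration steps. The NumPy arrays, time step, and damping factor are illustrative assumptions, not the published system's formulation.

```python
# Illustrative sketch: advance painted per-vertex velocities with explicit
# Euler steps while the user watches the resulting deformation.
import numpy as np

def simulate(vertices, painted_velocity, dt=0.01, damping=0.98, steps=100):
    """vertices, painted_velocity: (N, 3) arrays of positions and velocities."""
    v = painted_velocity.copy()
    for _ in range(steps):
        vertices = vertices + dt * v   # advance the surface
        v = v * damping                # let the painted motion die out
    return vertices

# Example: push the upper vertices of a small random patch upward.
verts = np.random.rand(8, 3)
vel = np.zeros_like(verts)
vel[verts[:, 2] > 0.5] = [0.0, 0.0, 1.0]
deformed = simulate(verts, vel)
```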

Project Homepage

Tortola

Hardware-Software Symbiosis

Kim Hazelwood

Modern computer systems designers must consider many more factors than just raw performance. Thermal output, power consumption, reliability, testing, and security are quickly becoming first-order concerns. Until recently, the vast majority of research efforts in optimizing computer systems have targeted a single logical "layer" in isolation: application code, operating systems, virtual machines, microarchitecture, or circuits. However, we feel that we are reaching the limits of the solutions that can be provided by targeting a single layer in isolation, and that there is an important class of computing challenges better suited to more holistic approaches. The Tortola project is exploring a symbiotic relationship between a virtual machine and the host microarchitecture to solve future challenges in the areas of power, reliability, security, and performance. In general, we attack challenges that lend themselves to "reactive" techniques, whereby the hardware detects a problem and the virtual machine uses its global knowledge about the program to correct it.

Project Homepage

VCGR

Virginia Center for Grid Research

Andrew Grimshaw

The Virginia Center for Grid Research is dedicated to performing research and solving issues surrounding the operation, deployment, and use of large distributed data and computing systems. Our overriding objective is to advance the science and application of grid computing so that it is more useful and readily available to those end users who can benefit from its power. Our goal is not simply to solve a few pieces of the overall grid-computing puzzle (an important part, to be sure), but also to promote the use of grid computing systems to improve the capabilities of other areas of science and to perform research and share information and ideas. The scope of our charter covers a wide spectrum of activities, including pure computer science and engineering research, grid system development and deployment, grid research community interaction, development of standards, and end-user collaboration and outreach. We have created the Global Bio Grid (GBG) to facilitate several research efforts in the biological and medical fields at various institutions. We plan to identify and solve key research issues; demonstrate, evangelize, and educate user communities on the usefulness of grid systems; improve the ease of use of these systems while working with end users and soliciting their feedback; and increase the reach of grid systems.

Project Homepage

VEST

Virginia Embedded Systems Toolkit

Jack Stankovic

VEST (Virginia Embedded Systems Toolkit) is an integrated environment for constructing and analyzing component-based embedded and real-time systems. Generally speaking, the VEST environment is designed to help designers select or create passive components (collections of functions, classes, hardware, etc.), compose them into a system or product, map the passive components onto runtime (active) structures such as threads, map those threads onto specific hardware, and perform dependency checks and non-functional analyses to offer as many guarantees as possible along many dimensions, including real-time performance and reliability.
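
As an example of the kind of non-functional analysis such a toolkit can run once components are mapped onto threads, the sketch below applies the classical rate-monotonic utilization bound (Liu and Layland) to an invented task set; VEST's own analyses are broader than this single test.

```python
# Sufficient (not necessary) schedulability test for fixed-priority,
# rate-monotonic scheduling on one processor. Task parameters are invented.
def rm_schedulable(tasks):
    """tasks: list of (worst_case_exec_time, period) pairs."""
    n = len(tasks)
    utilization = sum(c / t for c, t in tasks)
    bound = n * (2 ** (1 / n) - 1)     # Liu & Layland utilization bound
    return utilization <= bound, utilization, bound

ok, u, bound = rm_schedulable([(1, 4), (2, 8), (1, 20)])
print(f"U = {u:.3f}, bound = {bound:.3f}, schedulable: {ok}")
```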

Project Homepage

VLSI CAD

Physical Design and Layout

James Cohoon and Gabriel Robins

Recent trends in deep-submicron Very Large Scale Integrated (VLSI) circuit technology have resulted in new requirements for circuit layout algorithms. Much of our research centers on new formulations that capture performance and density criteria in computer-aided design (CAD). Our results include near-optimal approximation algorithms for computationally intractable problems such as minimum-cost Steiner tree routing, low-skew clock routing, cost-radius tradeoffs, bounded-density trees, circuit probe testing, Elmore-based routing constructions, multi-port terminal routing, and new metal-fill formulations for manufacturing yield enhancement. Our methods address not only traditional and leading-edge integrated circuit technologies, but also newer design styles such as field-programmable gate arrays (FPGAs) and multi-chip modules. We are also investigating other topics in discrete algorithms, combinatorial optimization, and computational geometry.
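
To give a flavor of these problems, the sketch below builds a spanning tree over a set of pins under the rectilinear (Manhattan) metric, a classical starting point for Steiner-tree approximations; the pin coordinates are invented, and production routers use far more sophisticated constructions than this.

```python
# Prim's algorithm over terminals in the Manhattan metric; the resulting
# spanning tree is a standard baseline for rectilinear Steiner routing.
def manhattan(p, q):
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def mst_edges(terminals):
    """Returns tree edges as (index_in_tree, index_added) pairs."""
    in_tree = {0}
    edges = []
    while len(in_tree) < len(terminals):
        best = min(
            ((i, j) for i in in_tree
                     for j in range(len(terminals)) if j not in in_tree),
            key=lambda e: manhattan(terminals[e[0]], terminals[e[1]]),
        )
        edges.append(best)
        in_tree.add(best[1])
    return edges

pins = [(0, 0), (4, 1), (1, 5), (6, 6)]   # invented pin locations
print(mst_edges(pins))
```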

Project Homepage

Willow

Survivability Architecture for Information Systems

John Knight

The Willow system is designed to support the survivability of large distributed information systems. As part of its approach, Willow deals broadly with their faults, applying fault avoidance by intentionally disabling vulnerable network elements when a threat is detected or predicted, fault elimination by replacing system software elements when faults are discovered, and fault tolerance by reconfiguring the system if non-maskable damage occurs. The reactive component of Willow supplements the usual information system fabric with a comprehensive fault-tolerance mechanism referred to as a survivability architecture or information survivability control system. The key to the architecture is a powerful reconfiguration mechanism combined with a general control loop structure in which network state is sensed and analyzed and required changes are effected. The main challenges Willow overcomes are those of scalability and complexity.

Project Homepage

WSN

Wireless Sensors Networks

Jack Stankovic and Sang Son

It is now possible to develop large numbers of small, smart components that combine computing power, wireless communication capabilities, and specialized sensors and actuators. These components, or nodes, may be deployed in the thousands to achieve a common mission. They may be used to monitor poorly accessible or dangerous environments such as the ocean floor, areas of volcanic activity, hostile territories (e.g., behind enemy lines), disaster areas, and radioactive fields. They may also be deployed to accomplish interactive tasks such as finding and detonating enemy mines, looking for survivors of natural disasters, or containing and isolating oil spills to protect a nearby coastline. These wireless sensor devices are also useful for environmental monitoring, medical applications, and smart homes or buildings. Our work includes developing MAC- and routing-layer solutions, group management based on novel "enough" semantics, analysis and implementation techniques for achieving aggregate behavior, novel data-services protocols including sensor-network querying capabilities, a sophisticated event service for WSNs, protocols for power management, protocols for computer security, and new paradigms for sensor-network programming. We are working with a testbed of MICA and XSM motes to investigate detection, tracking, and classification with power management capabilities.

Project Homepage

WSNM

Wireless Sensor Networks for Medicine

Jack Stankovic and Sang Son

Research hospitals are already regarded as using the latest in medical technology and techniques. Technology developed there will gradually be adopted by other hospitals, clinics, nursing homes, assisted-living facilities, independent-living communities, and other healthcare providers. The future will see the integration of the abundance of existing specialized medical technology with pervasive, wireless networks. They will co-exist with the installed infrastructure, augmenting data collection and real-time response. We are developing an architecture for smart healthcare that will open up new opportunities for continuous monitoring of assisted- and independent-living residents. While preserving resident comfort and privacy, the network manages a continuous medical history. Unobtrusive area and environmental sensors combine with wearable interactive devices to evaluate the health of spaces and the people who inhabit them. Authorized care providers may monitor resident health and activity patterns, such as circadian rhythm changes, which may signify changes in healthcare needs. High costs of installation and retrofitting are avoided by using ad hoc, self-managing networks.

Project Homepage

Zeus

Practical Formal Techniques for Software Development

John Knight

For many years, academics have claimed that the use of formal methods in software development would help industry meet its goals of producing better software more efficiently. Despite their popularity in academia, formal methods are still not widely used by commercial software companies. The goals of the Zeus project are to analyze the fundamental basis for the disparity between research and industry, and to develop techniques that enable industrial software development to benefit from formal methods. The project includes a comprehensive analysis of the operational process requirements, a detailed cost-benefit model, appropriate tools and techniques, and case studies that address the limitations found previously when trying to integrate formal techniques and that also illustrate the associated benefits and risks.

Project Homepage

Past Projects


BeeHive

Global Multimedia Database Support for Dependable Real-Time Applications

Jack Stankovic, Sang Son and Jörg Liebeherr

The influence of computers, communications, and databases is quickly creating a global virtual database in which many applications require real-time access to both temporally accurate and multimedia data. We are developing a global virtual database, called BeeHive, which is enterprise-specific and offers features along the real-time, fault-tolerance, quality-of-service (for audio and video), and security dimensions. Support for all these features and the trade-offs between them will provide a significant improvement in performance and functionality over browsers, browsers connected to databases, and, in general, today's distributed databases. Such results should be applicable to sophisticated real-time control applications as well as to the next-generation Internet. In this project, we have developed various novel component technologies that are to be incorporated into BeeHive, especially with respect to real-time concurrency control, transaction scheduling, security trade-offs, resource models, and associated admission control and scheduling algorithms. We have also developed and implemented a cogency monitor, which is the interface from BeeHive (an enterprise system) into the open Internet.

Project Homepage

Clairvoyant

Wireless Embedded Source-level Debugging

Kamin Whitehouse and Mary Lou Soffa

Clairvoyant is the first comprehensive source-level debugger for WSNs. It provides standard debugging commands, including break, step, watch, and backtrace, as well as new special-purpose commands that deal with interrupts and conditional breakpoints, and new global commands such as gstop and gcontinue.

Project Homepage

Computer Vision

Robotics, Planning, and Artificial Intelligence

Worthy Martin

We do theoretical and experimental research on computer vision and image processing, with an emphasis on real-time vision applications. Our goal is to develop real-time vision architectures, including applications for visually guided mobile robots. Topics of current interest include software for advanced vision systems, architectures and representational structures for visually guided mobile robots, systems that facilitate the use of pipelined image processors, visual tracking of moving objects, motion understanding, and perception-action systems. We have several robots, a large set of image-processing equipment, and an SGI reality engine at our disposal.

Project Homepage

GPGPU

General Purpose Computing on the GPU

Kevin Skadron

UVa's GPGPU work started many years ago when we looked at the application of graphics hardware to general-purpose numeric computing. Specifically, we used programmable graphics hardware to create a system capable of solving a variety of partial differential equations with complex boundary conditions. Our system implements the multigrid method, a fast and popular approach to solving large boundary value problems. We demonstrated the viability of this technique by using it to accelerate three applications: simulation of heat transfer, modeling of fluid mechanics, and tone mapping of high dynamic range images. We also demonstrated that it is possible to cleanly map a state-of-the-art tone mapping algorithm to the pixel processor, which allows an interactive application to achieve higher levels of realism by rendering with physically based, unclamped lighting values and high dynamic range texture maps. Subsequent work, led by Kevin Skadron, has evolved into the Rodinia project.
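
The NumPy sketch below shows the Jacobi relaxation step at the heart of multigrid, written for the 1D Poisson problem; it is a CPU-side illustration only, with an arbitrary grid size and right-hand side, whereas the project expresses such stencils as fragment-program passes over textures.

```python
# Jacobi relaxation for -u'' = f on [0,1] with zero boundary values.
import numpy as np

def jacobi(u, f, h, sweeps=50):
    for _ in range(sweeps):
        # each sweep uses the previous iterate on the right-hand side
        u[1:-1] = 0.5 * (u[:-2] + u[2:] + h * h * f[1:-1])
    return u

n = 65                     # arbitrary grid size
h = 1.0 / (n - 1)
f = np.ones(n)             # constant source term, chosen for illustration
u = np.zeros(n)            # initial guess; boundaries stay at zero
u = jacobi(u, f, h)
```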

Project Homepage

HoLSt

Hierarchical Loadable Schedulers

Jack Stankovic

Audio and video applications, process control, agile manufacturing and even defense systems are using commodity hardware and operating systems to run combinations of real-time and non-real-time tasks. We are developing an architecture that will allow a general-purpose operating system to schedule conventional and real-time tasks with diverse requirements, to provide flexible load isolation between applications, users, and accounting domains, and to enforce high-level policies about the allocation of CPU time. This is accomplished by implementing a dynamic, hierarchical scheduling infrastructure. The infrastructure is integrated with a resource manager that provides a level of indirection between resource requests and the scheduling hierarchy. A scheduling infrastructure separates scheduler code from the rest of the operating system.
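
The Python sketch below illustrates the hierarchical idea in a much simplified form: a root scheduler splits CPU time between child schedulers with different policies. The proportional-share split, the child policies, and the task names are assumptions for this example, not the project's actual infrastructure.

```python
# Toy hierarchical scheduler: the root makes a proportional-share (lottery-
# style) choice between children, and each child applies its own policy.
import random

class FixedPriorityChild:
    def __init__(self, tasks):                # highest priority first
        self.tasks = tasks
    def pick(self):
        return self.tasks[0] if self.tasks else None

class RoundRobinChild:
    def __init__(self, tasks):
        self.tasks = list(tasks)
    def pick(self):
        if not self.tasks:
            return None
        self.tasks.append(self.tasks.pop(0))  # rotate the queue
        return self.tasks[-1]

class RootScheduler:
    def __init__(self, children_with_shares):
        self.children = children_with_shares  # list of (child, share) pairs
    def pick(self):
        total = sum(share for _, share in self.children)
        r = random.uniform(0, total)
        for child, share in self.children:
            r -= share
            if r <= 0:
                return child.pick()

root = RootScheduler([(FixedPriorityChild(["rt_audio"]), 0.3),
                      (RoundRobinChild(["editor", "compiler"]), 0.7)])
print([root.pick() for _ in range(5)])
```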

Project Homepage

ITIC

Internet Technology Innovation Center

Alfred Weaver

Virginia's Center for Innovative Technology established the Internet Technology Innovation Center (Internet TIC) to assist the information technology industry and its growing base of Internet-related businesses. Our mission is to: (1) nurture the entrepreneurial environment for IT and Internet-related businesses; (2) accelerate the creation and deployment of network-based information technology; (3) develop the hardware and software infrastructure needed for a knowledge-based economy; and (4) expand Virginia's high-skill workforce in information technology. The Center is a unique partnership among the University of Virginia, Virginia Tech, George Mason University, and Christopher Newport University. Its faculty, staff, and students perform R&D in the areas of electronic commerce, e-business software, digital libraries, knowledge management, Internet multimedia, high-speed access, Internet2, fiber optics, wireless communications, and mobile and portable radios. In our first two years of operation we assisted 200 companies and conducted $8 million of funded R&D projects.

Project Homepage

JDelta

Testbed for Delta Compression Algorithms in Diverse Environments

Alfred Weaver

Delta compression is the process of comparing two files to produce a set of instructions that will convert one file into the other. Storing or transmitting a delta file rather than the entire new file can offer significant efficiency gains. However, the different aspects of delta-compression efficiency, like many trade-offs in computer science, rest at opposite ends of a balance, and no single delta compression algorithm performs best in every environment, because performance requirements vary. For users of delta compression to choose the best algorithm for their situation, they must have some understanding of the strengths and weaknesses of the various delta compression algorithms available to them. JDelta, a Java-based delta compression utility, has been designed and implemented both to aid in this understanding and to provide a framework for the future development of new delta compression techniques.
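
The Python sketch below (a toy, not JDelta itself) shows the basic mechanism: compare an old and a new version, emit copy and insert instructions, and replay them to reconstruct the new version from the old one.

```python
# Toy delta compression built on difflib's sequence matching.
from difflib import SequenceMatcher

def make_delta(old: bytes, new: bytes):
    ops = []
    for tag, i1, i2, j1, j2 in SequenceMatcher(None, old, new).get_opcodes():
        if tag == "equal":
            ops.append(("copy", i1, i2))          # reuse bytes from the old file
        else:
            ops.append(("insert", new[j1:j2]))    # ship only the new bytes
    return ops

def apply_delta(old: bytes, ops):
    out = bytearray()
    for op in ops:
        if op[0] == "copy":
            out += old[op[1]:op[2]]
        else:
            out += op[1]
    return bytes(out)

old = b"the quick brown fox jumps over the lazy dog"
new = b"the quick red fox jumps over the very lazy dog"
assert apply_delta(old, make_delta(old, new)) == new
```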

Project Homepage

MaSTRI

Modeling and Simulation Technology Research Initiative

Paul Reynolds

The focus of the MaSTRI Project is the solution of critical challenges that have inhibited or prevented the use of modeling and simulation technology in otherwise practical settings. Critical challenges include simulation reuse, multi-resolution modeling, composability, interoperability, visualization, behavioral modeling and integration of modeling and simulation (M&S) into training and education. Our research is focused on the areas of simulation coercion and simulation coercibility, which we collectively refer to as COERCE. We observe that COERCE has direct application to the challenges of simulation reuse and composability. COERCE can minimize problems caused by differences between models of the same phenomenon at different levels of resolution. In the area of simulation composability, COERCE has the potential to increase flexibility of the components comprising a simulation. So far, we have experienced considerable success in coercing individual simulations that were not designed to be coerced, and we are exploring how simulation coercion can become more automated and be facilitated by developing simulations with the specific objective of coercibility.

Project Homepage

Medical Portal

Advancing Cyber Security with .NET

Alfred Weaver

The rapid worldwide deployment of the Internet and Web is the enabler of a new generation of e-healthcare applications, but the provision of a security architecture that can ensure the privacy and security of sensitive healthcare data is still an open question. We are using a web services approach to solve this problem. We authenticate users using a variety of biometric (fingerprint, iris scan, signature recognition) and digital (e-token, RFID, PIN generators) approaches; we generate custom authentication tokens that provide not only identification information but also a qualitative measure of the trustworthiness of the identification technology. We built an authorization rule engine that dynamically enforces data access rights. We use strong encryption on all data transmissions (e.g., electronic prescriptions). Our current research challenge is devising ways to share trust across organizational boundaries; we are currently experimenting with trust groups, attribute servers, and token exchange servers as possible solutions.
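
The sketch below illustrates what a rule-driven authorization check of this kind might look like; the roles, data categories, and trust scores are invented for the example and do not reflect the portal's actual rules.

```python
# Illustrative rule-based authorization: each rule names a role, a data
# category, and the minimum trust level the authentication token must carry.
RULES = [
    {"role": "physician", "data": "prescriptions", "min_trust": 0.7},
    {"role": "nurse",     "data": "vitals",        "min_trust": 0.5},
    {"role": "billing",   "data": "insurance",     "min_trust": 0.4},
]

def authorize(token, data_category):
    """token: dict with the user's role and the trust score of the
    authentication technology that produced it (e.g., fingerprint vs. PIN)."""
    for rule in RULES:
        if rule["role"] == token["role"] and rule["data"] == data_category:
            return token["trust"] >= rule["min_trust"]
    return False   # default deny

print(authorize({"role": "physician", "trust": 0.9}, "prescriptions"))  # True
print(authorize({"role": "physician", "trust": 0.6}, "prescriptions"))  # False
```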

Project Homepage

MRRL

Memory Reference Reuse Latency

Kevin Skadron

Memory Reference Reuse Latency (MRRL) is the distance (in completed instructions) between consecutive references to the same memory location. By measuring the reuse latencies of each unique address accessed by a benchmark, we are able to select a point at which to begin cache and branch predictor warm-up prior to each simulation sample cluster. Cache and branch predictor warm-up ensures accurate simulation; our delayed warm-up technique achieves accurate simulation in less time than modeling all cache and branch predictor interactions prior to each sample cluster. MRRL works very well for sampled simulations because MRRL-driven warm-up eliminates non-sampling bias just as well as full warm-up, which models all pre-cluster instructions. MRRL exploits temporal locality by modeling only those interactions that occur nearest to the clusters and are therefore most likely to be relevant to the simulation of the clusters, which gives MRRL its speed advantage.
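
The small Python sketch below shows how reuse latencies can be measured from an address trace; the trace itself is invented for illustration.

```python
# For each memory reference, record the number of completed instructions
# since the previous reference to the same address.
from collections import defaultdict

def reuse_latencies(trace):
    """trace: iterable of (instruction_count, address) pairs, in program order."""
    last_use = {}
    latencies = defaultdict(list)
    for icount, addr in trace:
        if addr in last_use:
            latencies[addr].append(icount - last_use[addr])
        last_use[addr] = icount
    return latencies

trace = [(10, 0x1000), (12, 0x2000), (25, 0x1000), (40, 0x1000), (41, 0x2000)]
print(dict(reuse_latencies(trace)))   # {4096: [15, 15], 8192: [29]}
```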

Project Homepage

Naccio

Policy-Directed Code Safety

David Evans

The goal of the Naccio Project is to develop a general architecture for defining and enforcing code safety policies. We built tools that take untrusted programs and specification files describing the execution platform and desired safety policy, and produce a new program that behaves like the original program but is guaranteed to satisfy the safety policy.

Project Homepage

Qsilver

Flexible Simulation Framework for Graphics Architectures

Kevin Skadron

Qsilver is a multipurpose tool for analyzing the performance characteristics of computer graphics hardware and software. It is a highly configurable micro-architectural simulator of the GPU that uses the Chromium system's ability to intercept and redirect an OpenGL stream. The simulator produces an annotated trace of graphics commands using Chromium, then runs the trace through a cycle-timer model to evaluate time-dependent behaviors of the various functional units. Qsilver can be used to analyze performance bottlenecks in existing architectures, to explore new GPU microarchitectures, and to model power and leakage properties. One innovation of this tool is the use of dynamic voltage scaling across multiple clock domains to achieve significant energy savings at almost negligible performance cost.
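
The sketch below gives a heavily simplified illustration of the cycle-timer idea: replay an annotated trace and charge each command's work to per-unit cycle counts. The unit names, per-item costs, and trace contents are invented for this example and are not Qsilver's actual model.

```python
# Toy cycle-timer: accumulate per-unit cycles for each traced command and
# report which functional unit is the bottleneck.
CYCLES_PER_ITEM = {"vertex": 4, "rasterizer": 1, "fragment": 6}   # invented costs

def run_timer(trace):
    """trace: list of dicts like {'vertex': 300, 'rasterizer': 9000, 'fragment': 9000}."""
    totals = {unit: 0 for unit in CYCLES_PER_ITEM}
    for cmd in trace:
        for unit, count in cmd.items():
            totals[unit] += count * CYCLES_PER_ITEM[unit]
    # frame time is bounded by the busiest (bottleneck) unit
    return totals, max(totals, key=totals.get)

trace = [{"vertex": 300, "rasterizer": 9000, "fragment": 9000},
         {"vertex": 120, "rasterizer": 2500, "fragment": 2500}]
print(run_timer(trace))
```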

Project Homepage

RoboCup

Simulated soccer

David Evans

The Wahoo Wunderkind Football Club investigates the application of swarm programming to simulated soccer. Our research takes part in the simulation league of the Robot World Cup Initiative (RoboCup), an international joint project to promote AI, robotics, and related fields. RoboCup attempts to foster AI and intelligent robotics research by providing a standard problem in which a wide range of technologies can be integrated and examined. RoboCup chose soccer as its central research topic, aiming at innovations that can be applied to socially significant problems and industries. The ultimate goal of the RoboCup project is that, by 2050, a team of fully autonomous humanoid robots will be able to win against the human world champion soccer team. For a robot team to actually play a soccer game, various technologies must be incorporated, including design principles of autonomous agents, multi-agent collaboration, strategy acquisition, real-time reasoning, robotics, and sensor fusion. RoboCup is a task for a team of multiple fast-moving robots in a dynamic environment, and it also offers a software platform for research on the software aspects of these problems. One of the major applications of RoboCup technologies is search and rescue in large-scale disasters.

Project Homepage

Software Testing

Exploiting Hardware Mechanisms and Multicore Technology

Mary Lou Soffa

Multicore technology and many hardware monitoring mechanisms have become ubiquitous in commodity machines in recent years. These hardware advances have been exploited for various software engineering tasks such as path profiling and dynamic code optimization, but they have not been leveraged in structural and data-flow testing. The overhead of software testing is dominated by the costs of monitoring for events and analyzing those events. Monitoring is generally performed by adding code instrumentation, which can prohibitively increase the execution time and code size of the program; in branch testing, for example, the execution time increases by 10%-30% on average while the code size grows by 60%-90%. This research explores the potential of exploiting hardware advances to perform structural and data-flow testing more efficiently while maintaining a high level of effectiveness. Ph.D. student: Kristen Walcott
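
The toy Python example below shows why software-only branch coverage is costly: every branch outcome is recorded through an extra call, and it is exactly this kind of instrumentation that hardware monitoring mechanisms could remove. The instrumented function is invented for illustration.

```python
# Software-instrumented branch coverage: each branch outcome incurs an
# extra recording call, which is the source of the overhead discussed above.
covered = set()

def record(branch_id, taken):
    covered.add((branch_id, taken))

def classify(x):                 # instrumented version of a two-branch function
    if x < 0:
        record(1, True)
        return "negative"
    record(1, False)
    if x == 0:
        record(2, True)
        return "zero"
    record(2, False)
    return "positive"

for value in [-3, 0, 7]:
    classify(value)
print(sorted(covered))           # all four branch outcomes exercised
```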

Project Homepage

Splint

Annotation-Assisted Lightweight Static Checking

David Evans

Splint is a tool for statically checking C programs using information provided in source-code annotations. Annotations are stylized comments that document assumptions about functions, variables, parameters, types and control flow. Splint exploits these annotations to perform aggressive checking and detect a large class of security vulnerabilities and common programming errors including buffer overflows, memory management errors including uses of dead storage and memory leaks, dangerous data sharing, and undefined program behavior. Current work is exploring ways to support user-defined annotations and automate the annotation process.

Project Homepage

Swarm Computing

Biologically-Inspired Programming

David Evans

Computing is rapidly moving away from traditional computers. Programs in the future will run on collections of mobile processors that interact with the physical world and communicate over ad hoc networks. We can view such collections as swarms. As with natural swarms, such as a beehive or ant colony, the behavior of a computational swarm emerges from the behaviors of its individual members. Our research focuses on developing methods for creating, understanding, and validating properties of programs that execute on swarms of computing devices. We strive to create and reason about swarm programs in principled ways. A promising approach is to construct swarm programs by combining primitives, where the functional and non-functional behavior of each primitive is described using formal notations. We are investigating techniques based on both experimental and analytical approaches for predicting the functional and non-functional properties of compositions of swarm primitives. Although the practical applications of swarm computing are in their infancy, there is great potential for useful applications. Successful swarm programming will depend on our ability to reason about swarm programs and to construct device programs based on high-level goals; our research takes the first steps toward that goal.

Project Homepage

Zephyr

National Compiler Infrastructure

Jack Davidson

The Zephyr philosophy is to build compilers from parts that may include front ends, back ends, optimizers, and the glue that holds all these pieces together. Parts can even be generated automatically from compact specifications. Once an intermediate form is described using Zephyr's Abstract Syntax Description Language (ASDL), Zephyr can generate data-structure definitions in C, C++, Java, Standard ML, and Haskell. The IR can be serialized on disk and freely exchanged among compiler passes written in these languages. Zephyr separates the intermediate code from the instruction set, so we can develop machine descriptions that can be reused not just with compilers, but also with other tools. The idea is to generate the machine-dependent parts from descriptions of instructions' semantics, of binary representations, or of other properties. Zephyr's Computer Systems Description Languages (CSDL) allow the user to describe as much or as little as needed for their application. Zephyr's very portable optimizer (vpo) provides instruction selection, instruction scheduling, and classical global optimization. Vpo operates on programs represented as register-transfer lists (RTLs) in SSA form. All vpo optimizations are "plug and play," and it is easy to add new ones. Vpo provides a level playing field for evaluating different optimizations.

Project Homepage