Carnegie Mellon University
Current Projects, Sorted Alphabetically
Adaptive Introspection for Robust Long Duration Autonomy
Long-duration autonomy for unmanned systems is difficult to achieve because current systems are designed only for anticipated exceptions and do not adapt to long-term changes in the environment. Designing long-duration experiments that provoke unanticipated exceptions is also difficult. In this project we will enable long-term operation in unpredictable environments by developing an adaptive introspection and deployment approach and evaluating the ideas in an experimental setup that provokes exceptions.
Adaptive Traffic Light Signalization
As part of the Traffic21 initiative at CMU, we are investigating the design and application of adaptive traffic signal control strategies for urban road networks.
Non-contact 3-D surgical instrument tracking for device testing and surgeon assessment.
Assistive Robots for Blind Travelers
As robotics technology evolves to a stage where co-robots, or robots that can work with humans, become a reality, we need to ensure that these co-robots are equally capable of interacting with humans with disabilities. This project addresses this challenge by exploring meaningful human-robot interaction (HRI) in the context of assistive robots for blind travelers.
Automated Reverse Engineering of Buildings
The goal of this project is to use data from 3D sensors to automatically reconstruct compact, accurate, and semantically rich models of building interiors.
Autonomous Driving Motion Planning
The goal of this project is to develop efficient, high-performance motion planning methodologies for highway and urban autonomous driving.
Autonomous Ground Vehicle Design
Pursuing high-speed navigation of unrehearsed terrain for the 2005 DARPA Grand Challenge.
Autonomous Mobile Assembly (ACE)
The ACE project is concerned with autonomous mobile assembly.
Autonomous Navigation System (ANS)
The NREC is leading the development of perception and path planning within the Autonomous Navigation System program for the Future Combat System.
Autonomous Off-Road Driving
We are developing autonomous technology for off-road driving in wilderness environments. Key developments include perception, planning, and control capabilities. This is a joint development with Yamaha Motor Corporation and the CMU Field Robotics Center.
Autonomous Robotics Manipulation
Carnegie Mellon’s Autonomous Robotic Manipulation (ARM-S) team develops software that autonomously performs complex manipulation tasks.
Autonomous Vehicle Health Monitoring
As DoD autonomous vehicles begin to take on more complex and longer-duration missions, they will need to incorporate knowledge about the current state of their sensing, actuation, and computing capabilities into their mission and task planning.
Autonomous Vehicle Safety Verification
This project investigates safety verification of autonomous driving behaviors.
Autonomous Vineyard Canopy and Yield Estimation
This research project aims to design and demonstrate new sensor technologies for autonomously gathering crop and canopy size estimates from a vineyard -- expediently, precisely, accurately, and at high resolution -- with the goal of improving vineyard efficiency by enabling producers to measure and manage the principal components of grapevine production on an individual-vine basis.
Biodegradable Electronics
We are developing implantable, biodegradable electronic devices that offer the potential to provide therapeutic functions for limited periods of time -- weeks to months -- degrading in register with the anticipated needs of the application and thus not requiring surgical removal. One application is a biodegradable radio frequency (RF) power generator connected to electrical stimulating electrodes to enhance bone regeneration.

We are developing implantable, wireless MEMs-based sensors for various applications, such as monitoring bone regeneration and left ventricular pressure, to provide timely feedback to clinicians to help make better decisions on timing of therapeutic interventions.

We have designed and built inkjet-based bioprinters to controllably deposit spatial patterns of various growth factors and other signaling molecules on and in biodegradable scaffold materials to guide tissue regeneration.

Blood-Plasma Based Bioplastics
We have developed a manufacturing process to convert donated blood plasma and platelets into inexpensive, off-the-shelf bioactive plastics to enhance and accelerate tissue healing. These materials contain nature’s own mix of growth factors in highly concentrated solid to semi-solid forms that controllably elute these factors as the bioplastics degrade. This technology is currently in human clinical trials.

Braille Tutor
Literacy has been shown to be a key factor in global development. For many visually impaired communities around the world, learning braille is the only means of literacy. Despite its significance and the accessibility it brings, learning to write braille still faces a number of barriers. According to the World Health Organization, approximately 90% of visually impaired people worldwide live in developing communities. Despite the importance of literacy to employment, social well-being, and health, the literacy rate of this population is estimated to be very low. Many different factors contribute to illiteracy among people with vision impairments, such as difficulties using the traditional tool for writing braille (the slate and stylus) and the high cost of alternative braille writing tools.
Automates copper processing
Cell Tracking
We are developing fully automated, computer vision-based cell tracking algorithms and a system that automatically determines the spatiotemporal history of dense populations of cells over extended periods of time.
To develop electric vehicles (EVs) that are as efficient and cost-effective as possible, we have taken a systems-level approach to design, prototyping, and analysis to produce formally-modeled active vehicle energy management.
The Chiara is a new, open source educational robot, developed at Carnegie Mellon University's Tekkotsu lab, that will be manufactured and sold by RoPro Design, Inc.
Circuit Extraction from MEMS Layout
We are developing a MEMS extraction module which reads in the geometric description of the layout structure and reconstructs the corresponding schematic.
Cluster: Coordinated Robotics for Material Handling
Planetary robots which perform assembly tasks to prepare for human exploration must be able to operate in unmodeled environments and in unanticipated situations. We are working on a system of mobile robots that perform precise coordinated maneuvers for transporting assembly materials. We are also developing an interface that allows an operator to step in at various levels of autonomy, providing the system with both the efficiency of an autonomous system and the reliability of a human operator.
Cohn-Kanade AU-Coded Facial Expression Database
An AU-coded database of over 2000 video sequences of over 200 subjects displaying various facial expressions.
Comprehensive Automation for Specialty Crops (CASC)
CASC is a multi-institutional initiative led by Carnegie Mellon Robotics Institute to comprehensively address the needs of specialty agriculture focusing on apples and horticultural stock.
Computer Assisted Medical Instrument Navigation
We are developing a system to help clinicians to precisely navigate various catheters inside human hearts.
Context-based Recognition of Building Components
In this project, we are investigating ways to leverage spatial context for the recognition of core building components, such as walls, floors, ceilings, doors, and doorways for the purpose of modeling interiors using 3D sensor data.
Cooperative Robotic Watercraft
This project's vision is to have large numbers of very inexpensive airboats provide situational awareness and deliver critical emergency supplies to flood victims.
Coplanar Shadowgrams for Acquiring Visual Hulls of Intricate Objects
We present a practical approach to shape-from-silhouettes using a novel technique called coplanar shadowgram imaging that allows us to use dozens to even hundreds of views for visual hull reconstruction.
NREC designed and developed the Crusher vehicle to support the UPI program's rigorous field experimentation schedule.
We are developing a curriculum for the Introduction to Computer Science (CS1) course taught at two and four year colleges and for high school Computer Science courses.
CTA Robotics
This project addresses the problems of scene interpretation and path planning for mobile robot navigation in natural environments.
Depression Assessment
This project aims to compute quantitative behavioral measures related to depression severity from facial expression, body gestures, and vocal prosody in clinical interviews.
Detailed Wall Modeling in Cluttered Environments
The goal of this project is to develop methods to accurately model wall surfaces even when they are partially occluded and contain numerous openings, such as windows and doorways.
Distributed SensorWebs
The Sensor Web initiative develops and implements wireless technology for distributed sensing and actuation in horticultural enterprises.
DRC Tartan Rescue Team
During the Fukushima-Daiichi nuclear accident, robots weren’t able to inspect the facility, assess damage, and fix problems. DARPA wants to change this.
Dynamic Biped
We are developing a new series of bipedal walking robots that use passive-dynamic principles.
Dynamically-Stable Mobile Robots in Human Environments
We are developing novel dynamically stable rolling-machine and walking-machine research platforms to study interaction with people and operation in normal home and workplace environments.
E57 Standard for 3D Imaging System Data Exchange
The goal of this project is to develop a vendor-neutral data exchange format for data produced by 3D imaging systems, such as laser scanners.
The Ember project uses multi-agent teams, comprised of autonomous and human agents, to achieve effective results under emergency situations.
Event Detection in Videos
Our event detection method can detect a wide range of actions in video by correlating spatio-temporal shapes to over-segmented videos without background subtraction.
Exploration of Planetary Skylights and Caves
The NREC is developing an untethered, long range (2,500 ft +), gas line visual inspection robot system that provides real-time video from inside the line, can be deployed in live lines, and can pass through all angles and bends of both 6" and 8" lines.
Extrinsic Dexterity
"Extrinsic Dexterity" is a way to get dexterous manipulation with a very simple hand, by coordinating finger motion with arm motion. The more common approach is to depend entirely on the fingers of the hand, which requires at least three fingers and at least nine motors. We have demonstrated Extrinsic Dexterity using the single motor of the MLab Hand, coordinated with the motions of the arm.
Face Recognition
Recognizing people from images and videos.
Facial Expression Analysis
Automatic facial expression encoding, extraction, and recognition, and expression intensity estimation, for MPEG-4 applications such as teleconferencing and human-computer interaction/interfaces.
Facial Feature Detection
Detecting facial features in images.
Factory Automation
We are developing the next generation of mobile robots for operating in factory environments. These robots can localize without modifying the factory and navigate any path within it, with the ability to replan paths to avoid unexpected obstacles. These new capabilities will increase the throughput of factories and decrease the time required to deploy (and re-deploy) the robots.
Feature Selection
Feature selection in component analysis.
Feature-based 3D Head Tracking
A feature-based head tracking algorithm that can handle occlusions and fast motion of the face.
Fine Outreach for Science
The Fine Outreach for Science, sponsored by the Fine Foundation, provides GigaPan units to scientists and documents the evolution of GigaPan as a research tool.
We are developing videotactile fingertip sensors which will enable people to interact with the visible world via their fingertips.
Footstep Planning for Biped Robots
Navigation strategies for bipeds through complex environments, planning for the full capabilities of the biped.
Forecasting the Anterior Cruciate Ligament Rupture Patterns
Use of machine learning techniques to predict the injury pattern of the Anterior Cruciate Ligament (ACL) using non-invasive methods.
Formal Models of Human Control and Interaction with Cyber-Physical Systems
Cyber-Physical Systems (CPS) encompass a large variety of systems, including future energy systems (e.g., the smart grid), homeland security and emergency response, smart medical technologies, smart cars, and air transportation. The goal of this project is to develop cognitively based analytic models of human operators that can be integrated with models of the physical/robotic system, so that the whole mixed human-CPS system can be formally verified.
Formal Verification of Autonomous Systems
We are developing tools and techniques to support formal verification of autonomous systems.
Foundation for MEMS Synthesis (MEMSYN)
Shortening the MEMS development cycle.
Free-Roaming Planar Motors
We are developing autonomous planar motors for precision positioning.
Frontal Face Alignment
This face alignment method detects generic frontal faces with large appearance variations and 2D pose changes and identifies detailed facial structures in images.

Generic Active Appearance Models
We are pursuing techniques for non-rigid face alignment based on Constrained Local Models (CLMs) that exhibit superior generic performance over conventional AAMs.
GigaPan is the newest development of the Global Connection Project, which aims to help us meet our neighbors across the globe, and learn about our planet itself.
The NREC-led team designed, developed and field tested and successfully demonstrated a Gladiator robotic system with high mobility and remote combat capabilities.
Google Lunar X Prize
We are part of a $30 million international competition to safely land a robot on the surface of the Moon, travel 500 meters over the lunar surface, and send images and data back to the Earth.
GPS-denied Localization using ground and air vehicles
In this project, we are developing mapping and localization methods that combine aerial imagery from satellite and aerial platforms with maps and perception from ground-based robots to produce integrated maps even when GPS is unavailable.
Hand Held Force Magnifier
We have developed a novel and relatively simple method for magnifying forces perceived by an operator using a tool. A sensor measures the force between the tip of a tool and its handle held by the operator’s fingers.
Harnessing Human Manipulation
A miniature mobile robot for minimally invasive therapy on the beating heart through a single percutaneous incision.
Helicopter Obstacle Avoidance and Landing
In this project we develop the trajectory planning system for an autonomous helicopter used for cargo delivery.
High-Aspect-Ratio CMOS Micromachining Process
We have developed an integrated CMOS- MEMS process in which electrostatically actuated microstructures with high-aspect-ratio composite-beam suspensions are fabricated using conventional CMOS processing.
Highly-Articulated Robotic Probe (HARP)
We developed and tested a prototype based on an innovative approach of a highly articulated robotic probe.
Hot Flash Detection
Machine learning algorithms to detect hot flashes in women using physiological measures.
Human Control of Robotic Swarms
Robotic Swarms are distributed systems whose members interact via local control laws to achieve different behaviors. The goal of the project is to develop effective methods for human-swarm interaction and control considering realistic environment and system constraints.
Human-Robot Interaction
The human-robot interaction project explores aspects of social interaction between people and robots, in particular how robots should be designed to provide people with appropriate interactions.
Hydroponic Automation
We are developing inexpensive robotic approaches towards hydroponic growing, which can increase overall crop yield.
IMU-Assisted KLT Feature Tracker
The KLT (Kanade-Lucas-Tomasi) method tracks a set of feature points through an image sequence. Our goal is to enhance KLT with inertial (IMU) measurements to increase the number of feature points and their tracking length under real-time constraints.
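The core of a KLT-style tracker is a Lucas-Kanade step that solves a small linear system built from image gradients. The sketch below is illustrative only -- a single translation-only step on a whole patch, not the project's multi-point, IMU-assisted implementation.

```python
import numpy as np

def lk_translation(img0, img1):
    """One Lucas-Kanade step: estimate the (dx, dy) translation that
    carries img0 into img1 by solving the 2x2 normal equations built
    from spatial gradients and the temporal difference."""
    img0 = img0.astype(float)
    img1 = img1.astype(float)
    Iy, Ix = np.gradient(img0)           # spatial gradients
    It = img1 - img0                     # temporal difference
    A = np.array([[np.sum(Ix * Ix), np.sum(Ix * Iy)],
                  [np.sum(Ix * Iy), np.sum(Iy * Iy)]])
    b = -np.array([np.sum(Ix * It), np.sum(Iy * It)])
    return np.linalg.solve(A, b)         # [dx, dy]
```

In practice this step is iterated on small windows around each detected corner, often with an image pyramid to handle large motions; IMU data can seed the initial displacement guess so fewer iterations are needed per frame.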
In-Situ Image Guidance for Microsurgery
We have developed a new image-based guidance system for microsurgery using optical coherence tomography (OCT), which presents a continuously updated virtual image in its correct location inside the scanned tissue. OCT provides real-time, 6-micron resolution images at video rates within a 2-6 mm axial range in soft or transparent tissue, and is therefore suitable for guidance to various targets in the eye. Ophthalmologic applications in general are diverse within the realm of anterior-segment surgery, whether for medical treatment or for scientific experimentation. Surgical manipulations, especially of the cornea, limbus, and lens may eventually be aided or enabled, and as an example we are presently working to guide access to Schlemm’s canal for treating Glaucoma.
Indoor Flight in Degraded Visual Environments
Our goal is to fly indoors in degraded visual environments to localize people and fire. We are developing accurate real-time localization and control to be able to fly in these challenging conditions.
Indoor People Localization
Tracking multiple people in indoor environments with the connectivity of Bluetooth devices.
Informedia Digital Video Library
Informedia is pioneering new approaches for automated video and audio indexing, navigation, visualization, summarization, search, and retrieval, and embedding them in systems for use in education, health care, defense intelligence, and understanding of human activity.
Integrated MEMS Inertial Measurement Unit (IMIMU)
Developing a monolithic inertial measurement unit that exploits integrated-microdevice CAD tools to achieve superior system performance over individual microdevices.
Intelligent Monitoring of Assembly Operations (IMAO)
Our goal is to allow people and intelligent and dexterous machines to work together safely as partners in assembly operations performed within industrial workcells. To ensure the safety of people working amidst active robotic devices, we use vision and 3D sensing technologies, such as stereo cameras and flash LIDAR, to detect and track people and other moving objects within the workcell.
Inter-Process Communication Package (IPC)
We are developing a high-level support package for connecting and sending data among processes using TCP/IP sockets.
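A message-passing layer over stream sockets needs framing, since TCP delivers a byte stream with no message boundaries. The sketch below shows the common length-prefix pattern in Python; it only illustrates the underlying idea and is not IPC's actual API.

```python
import json
import socket
import struct

def send_msg(sock, msg):
    """Serialize a message and prefix it with a 4-byte length header,
    so the receiver knows where one message ends and the next begins."""
    data = json.dumps(msg).encode()
    sock.sendall(struct.pack("!I", len(data)) + data)

def recv_msg(sock):
    """Read the 4-byte length header, then exactly that many bytes."""
    (length,) = struct.unpack("!I", _recv_exact(sock, 4))
    return json.loads(_recv_exact(sock, length).decode())

def _recv_exact(sock, n):
    """recv() may return fewer bytes than requested; loop until done."""
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("socket closed mid-message")
        buf += chunk
    return buf
```

The same framing works across machines over TCP; a real package adds message types, subscription registries, and marshalling for structured data.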
iSTEP (innovative Student Technology ExPerience) is a unique internship program that provides Carnegie Mellon University students with the opportunity to conduct technology research projects in developing communities. Started in 2009 by the TechBridgeWorld research group, iSTEP is a rigorous and competitive 10-week internship program that requires the involvement of students with high levels of dedication, team work, cross-cultural adaptability, initiative and academic achievement.
Joystick Filtering for Movement Disorders
Filtering of joystick input for computer users with movement disorders
Knee Navigation Systems (KneeNav TKR/ACL) (KneeNav)
We are developing two CT-based surgical navigation systems for total knee replacement and ACL reconstructive surgery.
Learning Locomotion
Robust planning and control of the quadruped robot "Little Dog" to traverse rough terrain (DARPA sponsored).
Learning Optimal Representations
Learning optimal representations for classification, image alignment, visualization and clustering.
Lego Educational Robotics
Self-paced robotics education labs
Life in the Atacama
Robotic field investigation will bring new scientific understanding of the Atacama as a habitat for life with distinct analogies to Mars.
LNG Pipe Vision (LPV)
A pipe-crawling robot visually inspects pipes in liquid natural gas (LNG) plants for corrosion.
Low Dimensional Embeddings
Finding low dimensional embeddings of signals optimal for modeling, classification, visualization and clustering.
Low-Flying Air Vehicles
We leverage perception technology originally developed for ground-based robot vehicles during 20 years of research at the Field Robotics Center. We combine this proven perception and control technology with aircraft-centric engineering and optimization.
LSTAT/Snake Robot
We are working with the US Army's TATRC department (Telemedicine & Advanced Technology Research Center) to integrate a snake robot into the LSTAT system.
Lunar Ice Discovery Initiative (Icebreaker)
Icebreaker is a proposed mission to explore the south pole of the Moon.
Lunar Regolith Excavation and Transport
This research develops lightweight robotic excavators for digging and transporting regolith (loose soil) on the Moon.
Lunar Rover for Polar Crater Exploration (Scarab)
The Scarab lunar rover has been designed to carry a 1-meter coring drill and a payload of science instruments that can analyze the abundance of hydrogen, oxygen and other materials.
Micro Air Vehicle Scouts for Intelligent Semantic Mapping
The goal of this project is to develop the next level of capability for a low-flying, map-building MAV scout. The research will demonstrate rapid scouting in cluttered environments and the acquisition of relevant semantically annotated maps.
Micron: Intelligent Microsurgical Instruments
Suppression of hand tremor to improve precision in microsurgery.
Modeling Cultural Factors in Collaboration and Negotiation (MURI 14)
This multi-university cooperative project concentrates on modeling cultural factors in collaboration and negotiation. The goal of this project is to conduct basic research to provide validated theories and techniques for descriptive and predictive models of dynamic collaboration and negotiation that consider cultural and social factors.
Modelling Synergies in Large Human-Machine Networked Systems (MURI 7)
This multi-university cooperative project concentrates on modeling synergies in large human-machine networked systems. The goals of this project are to develop validated theories and techniques to predict the behavior of large-scale, networked human-machine systems involving unmanned vehicles; to model human decision-making efficiency in such networked systems; and to investigate the efficacy of adaptive automation for enhancing human-system performance.
Modular Snake Robots
Monitoring of Coastal Ocean Processes
This project is attempting to elucidate the basic principles governing environmental field model synthesis based on the integration of adaptive robot sampling with human decision-making.
The MORSE project is a simulated range operation, designed to evaluate effectiveness of the cognitive models and agents, in order to improve individual and team performance.
Multimodal Data Collection
A multimodal database of subjects performing the tasks involved in cooking, captured with several sensors (audio, video, motion capture, accelerometer/gyroscope).
Multimodal Diaries
Summarization of daily activity from multimodal data (audio, video, body sensors and computer monitoring)
Navigation Among Movable Obstacles (NAMO)
Autonomous motion planning and control for robots working in reconfigurable environments.
Safe and independent navigation of urban environments is a key feature of accessible cities. People who have physical challenges need practical, customizable, low-cost and easily-deployable mobility aids to help them safely navigate urban environments. Technology tools provide opportunities to empower people with disabilities to overcome some day-to-day challenges.
Needle Steering for Brain Surgery
We are developing high accuracy proportional steering of flexible needles for minimally invasive navigation in the brain.
Partial Order Scheduling Procedures
We are investigating the development, analysis, and application of optimizing search procedures for generating plans and schedules that retain temporal flexibility.
We are applying machine learning techniques to model and compute long-term and short-term trajectories of people in a variety of settings.
Planning for Manipulation
Developing algorithms for autonomous manipulation.
We are using video cameras to give vision to the ultrasound transducer. This could eventually lead to automated analysis of the ultrasound data within its anatomical context, as derived from an ultrasound probe with its own visual input about the patient’s exterior. We are exploring both probe-mounted cameras, as well as optically-tracked stand-alone cameras which could view a larger portion of the patient's exterior.
Project LISTEN's Reading Tutor
Project LISTEN's Reading Tutor listens to children read aloud.
Quality Assessment of As-built Building Information Models using Deviation Analysis
The goal of this project is to develop a method for conducting quality assessment (QA) of as-built building information models (BIMs) that utilizes patterns in the differences between the data within and between steps in the as-built BIM creation process to identify potential errors.
Real-Time Scheduling of ACCESS Paratransit Transportation
The goal of this project is to increase the effectiveness of paratransit service providers in managing daily operations through the development and deployment of dynamic, real-time scheduling technology.
RERC on Accessible Public Transportation
We are researching and developing methods to empower consumers and service providers in the design and evaluation of accessible transportation equipment, information services, and physical environments.
Riverine Mapping
This project is developing technology to map riverine environments from a low-flying rotorcraft. Challenges include dealing with varying appearance of the river and surrounding canopy, intermittent GPS and a highly constrained payload. We are developing self-supervised algorithms that can segment images from onboard cameras to determine the course of the river ahead, and we are developing devices and methods capable of mapping the shoreline.
In collaboration with the Drama Department, we are developing technology for long-term social interaction.
Robot Sensor Boat (RSB)
We present a fleet of autonomous Robot Sensor Boats (RSBs) developed for lake and river fresh water quality assessment and controlled by our Multilevel Autonomy Robot Telesupervision Architecture (MARTA).
Robotic Perception for Underground Rescue
Robots are potential tools for saving lives in underground rescue operations such as mine disasters. Human rescuers are thwarted by roof falls, explosion dangers, poor air quality, limited visibility through smoke and dust, mental stress, and the limits of physical endurance.
Robotic Soccer (RoboSoccer)
The RoboSoccer project develops collaboration among multiple autonomous agents.
Robust Autonomous Freeway Driving Behaviors
The goal of this project is to develop robust autonomous freeway driving behaviors that include: distance keeping; handling entrance ramps; high-density traffic lane selection and merging; reasoning about sensor confidence, degradation, and failure; and accommodation of human-in-the-loop interaction.
Robust Detection of Highway Work Zones
This project is developing computer vision algorithms to detect and classify highway work zones.
Safety for UGVs
A flexible, behavior-based approach to safety lowers the risk of operating a large, fast-moving UGV.
Schematic Design for MEMS
We have developed nodal simulation software to enable a structured representation for MEMS design using a hierarchical set of MEM components.
Science Autonomy
The Science Autonomy project seeks to improve the accuracy and effectiveness of robotic planetary investigations by enabling automatic detection of relevant science features, classification of feature properties, and exploration planning that responds on-the-fly.
Search and Rescue
Giving Urban Search and Rescue workers more technological tools to help find and save victims of natural disasters.
Secure Agent Name Server
We are developing a secure agent name server which requires preregistration for deployment.
Sensabot Inspection Robot
NREC is developing an inspection robot for use in oil and gas production plants.
Sense and Avoid
We are developing Unmanned Aerial Vehicles (UAVs) that sense and avoid autonomously.
Shape Stable Body Frames
Simple Hands
Designing simple grippers for autonomous general purpose manipulation.
Simultaneous Localization and Mapping
We are developing a geometric mapping strategy that directs a mobile robot to explore an unknown environment while taking into consideration sensor and encoder uncertainty.
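Accounting for sensor and encoder uncertainty typically means fusing a motion prediction with a measurement, each weighted by its variance. The sketch below is a minimal 1-D Kalman step for illustration only -- it is not the project's geometric mapping strategy, just the simplest linear form of the encoder/sensor fusion idea.

```python
def kalman_step(x, P, u, z, Q, R):
    """One predict/update cycle for a 1-D robot position estimate.
    x, P: current position estimate and its variance
    u: encoder-reported motion, with noise variance Q
    z: position measurement from an exteroceptive sensor, variance R"""
    # Predict: move by the encoder reading; uncertainty grows.
    x_pred = x + u
    P_pred = P + Q
    # Update: blend prediction and measurement by their variances.
    K = P_pred / (P_pred + R)        # Kalman gain
    x_new = x_pred + K * (z - x_pred)
    P_new = (1.0 - K) * P_pred
    return x_new, P_new
```

Full SLAM extends this to a joint state over robot pose and landmark positions (or a map grid), but the same predict/update structure governs how exploration decisions trade off motion against localization uncertainty.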
The Snackbot is a mobile robot designed to deliver food to the offices at CMU while engaging in meaningful social interaction.

Snake Robot Design
Analyzing the factors that are of importance in designing a snake robot, and implementing new designs.
Social Robots
We are developing robots with personality.
Soft Tissue Simulation for Plastic Surgery
Software Package for Precise Camera Calibration
A novel camera calibration method that increases not only the accuracy of intrinsic camera parameters but also the accuracy of stereo camera calibration by utilizing a single framework for square, circle, and ring planar calibration patterns.
Sonic Flashlight™
We are developing a method of medical visualization that merges real time ultrasound images with direct human vision.
Spatio-Temporal Facial Expression Segmentation
A two-step approach that temporally segments facial gestures from video sequences. It can register both the rigid and non-rigid motion of the face.
Specialty Crop Automation
The Integrated Automation for Sustainable Specialty Crops Farming project teams the National Robotics Engineering Center (NREC), the University of Florida, Cornell University and John Deere to bring precision agriculture and autonomous equipment to citrus growers.
Spinner (UGCV)
With development of the Spinner unmanned ground vehicle, an NREC-led team delivered technical breakthroughs in mobility, mission endurance and payload fraction.
Stacking Planner
Generates plans for polyhedral sheet metal parts.
Strawberry Plant Sorter
NREC is developing an automated, machine vision-based strawberry plant sorter.
Stress Testing Autonomous Systems
Stress Tests for Autonomy Architectures (STAA) finds autonomy system safety problems that are unlikely to be discovered by other types of tests.
Sweep Monitoring (SMS)
NREC developed the Sweep Monitoring System (SMS) for training soldiers and demining personnel to use hand-held land mine detectors.
TechCaFE provides educators with simple and customizable tools to make learning fun for students. Through TechCaFE we are creating a suite of culturally and socially relevant computer and mobile phone based tools for enhancing English literacy skills among children and adults. This includes CaFE Teach, a web-accessible tool that teachers use to create and modify customized content for their students. Students can access and learn from the content added by teachers via CaFE Web, a web-based practice tool, or CaFE Phone, a mobile phone game. Current work on this project involves developing CaFE Play, which would serve as a platform for developers to create applications that incorporate teacher content within the context of games designed with the specific user population in mind.
Teleoperation Booth
NREC has developed an immersive teleoperation system that allows operators to remotely drive an unmanned ground vehicle (UGV) more effectively over complex terrain.
Teleoperation with a 12-DOF Coarse-Fine Manipulator
High-fidelity manipulation of remote environments using a 6-DOF robot equipped with a 6-DOF magnetic levitation fine-motion wrist.
Temporal Segmentation of Human Motion
Temporal segmentation of human motion
Temporal Shape-From-Silhouette
We are developing algorithms for the computation of 3D shape from multiple silhouette images captured across time.
Terrain Estimation using Space Carving Kernels
This project uses information about the ray extending from the sensor to the sensed surface to improve terrain estimation in unstructured environments.
Text Miner
We are developing Text Miner, a system that automatically classifies news reports on a company's financial outlook.
Texture Replacement in Real Images
We are developing methods to replace some specified texture patterns in an image while preserving lighting effects, shadows and occlusions.
The Aerial Robotic Infrastructure Analyst (ARIA)
The Aerial Robotic Infrastructure Analyst (ARIA) rapidly creates comprehensive, high-resolution, semantically rich 3D models of infrastructure – an interactive assistant for infrastructure inspection.
The Electric Cable Differential (ECD) Leg
We are designing a bipedal robot to be capable of running, walking, jumping, hopping, and generally behaving in a highly dynamic manner.
The Intelligent Workcell
This project is studying methods for augmenting industrial workcells with sensors and feedback mechanisms to enable workers and robots to operate safely in the same environment.
Tightly Integrated Stereo and LIDAR
The goal of this project is to use sparse, but accurate 3D data from LIDAR to improve the estimation of dense stereo algorithms in terms of accuracy and speed.
Time-optimal Vehicle Trajectories
What's the fastest way to drive a mobile robot?
Tooling Planner
Supports various decision making steps related to bending tools and press-brake setups.
Traffic Data Analysis
NREC and FHWA are developing techniques for automatically analyzing large amounts of video collected from vehicles traveling on highways.
Transforming Surface Representations to Volumetric Representations
This project’s goal is to transform the surface-based representations that are naturally derived from sensed data into volumetric representations needed by CAD and BIM.
Transitional Unmanned Ground Vehicle (TUGV)
Cross Country Navigation
Transportation Energy Resources from Renewable Agriculture (TERRA)
We are developing a robotic phenotyping system to support rapid crop-breeding decisions. The system positions sensors within the canopy to take measurements not observable from above or below. Machine learning and computer vision algorithms are then used to generate phenotyping data from the raw sensor data.
Treasure Hunt: Pickup Teams
We are developing a single heterogeneous human-robot team capable of effectively locating objects of interest (treasure) spread over a complex, previously unknown environment.
Tree Inventory
A tree inventory system uses vehicle-mounted sensors to automatically count and map the locations of trees in an orchard.
Tunnel Mapping
NREC is pioneering research and development of a low power, small, lightweight system for producing accurate 3D maps of tunnels through its Precision Tunnel Mapping program.
TURBO-PLAN: An Interactive Mission Planning Advisor
UAV/UGV Air-Ground Collaboration
This project is concerned with the development of a distributed estimation system of collaborating unmanned aerial vehicles (UAVs) and unmanned ground vehicles (UGVs) that detect, track, and estimate the location of a person, vehicle, or object of interest on the ground.
UGCV PerceptOR Integrated (UPI)
The UPI (UGCV PerceptOR Integrated) program integrates and enhances the results from UGCV and PerceptOR to increase the speed and autonomy of unmanned ground vehicles operating in complex terrain. By combining the inherent mobility of Spinner with advanced perception techniques including the use of learning and prior terrain data, the UPI program stresses system design across vehicle, sensors and software so that the strengths of one component compensate for the weaknesses of another.
Ultra-High-Density Data Cache for Low-Power Communications
Demonstrating technology for a 10 GB/cm² rewritable data storage system using MEMS-based actuators and magnetic recording.
Underground Mining Operator Assist
Automating the functions of a continuous mining machine and roof bolting units.
Understanding and Modeling Trust in Human-Robot Interactions
This collaboration with the UMass Lowell Robotics Lab seeks to develop quantitative metrics to measure a user's trust in a robot as well as a model to estimate the user's level of trust in real time. Using this information, the robot will be able to adjust its interaction accordingly.
Unification of Component Analysis
This project aims to find the fundamental set of equations that unifies all component analysis methods.
Unmanned Ground Vehicle for Security (Terrascout)
We are developing autonomous ATVs to secure borders and facility perimeters.
Urban Challenge
Carnegie Mellon University and General Motors built an autonomous SUV that won first place in the 2007 DARPA Urban Challenge.
Urban Search and Rescue
We are developing Hybrid Teams of Autonomous Agents: Cyber Agents, Robots and People (CARPs) to address the challenges of urban search and rescue.
We are exploring a mix of physics-based and data-driven high-fidelity sensor modeling techniques. The goal is to develop a system that provides much more realistic UGV simulation than current techniques. Such simulation will play a crucial role in speeding up the development cycle and in validating platforms. Sponsored by the US Army Corps of Engineers' Engineer Research and Development Center (ERDC).
Vehicle Localization in Naturally Varying Environments
The purpose of this project is to develop methods for place matching that are invariant to short- and long-term environmental variations in support of autonomous vehicle localization in GPS-denied situations.
Very Rough Terrain Nonholonomic Trajectory Generation and Motion Planning for Rovers
We are developing rough terrain trajectory generation algorithms for local path planning and optimal regional motion planning methods using a constrained search space.
Visual SLAM for Industrial Robots
We are exploring algorithms to support visual mapping and localization for a robot vehicle operating in an industrial setting such as an LNG production plant. This work is sponsored by QNRF.
Visual Yield Mapping with Optimal and Generative Sampling Strategies
This research project aims to develop methods to automatically collect visual image data to infer, estimate, and forecast crop yields -- producing high-resolution, accurate yield maps across large scales. To achieve efficiency and accuracy, statistical sampling strategies are designed for human-robot teams that are optimal in the number of samples, location of samples, cost of sampling, and accuracy of crop estimates.
Perceptual, reasoning and learning abilities in autonomous mobile robots