RI Seminar

Adrien Treuille, Chris Urmson, and Yaser Sheikh, Robotics Institute
Friday, January 23
3:30 pm to 12:00 am
Analyzing Dynamic Scenes from Moving Cameras & New Approaches to the Simulation and Control of Complex Dynamics & Advancing Self-Driving Cars

Event Location: NSH 1305
Bio: Yaser Sheikh:
Yaser Sheikh is an assistant research professor in the Robotics
Institute at Carnegie Mellon University. His research interest is in
the field of computer vision, primarily in analyzing dynamic scenes
including scene reconstruction, the geometry of mobile camera
networks, and nonrigid motion estimation, with a particular focus on
analyzing human activity. He obtained his doctoral degree from the
University of Central Florida in May 2006, and from May 2006 to May
2008 he was a postdoctoral fellow at Carnegie Mellon University. He is
a recipient of the Hillman award for excellence in computer science
research.

Chris Urmson:
Chris Urmson has spent the last ten and a half years at the RI, first as a
student and now as a faculty member. He has had the good fortune of working
with many great teams and has been involved with projects as diverse as
Nomad’s Antarctic meteorite search, the AAAI robot challenge, and all three
DARPA Challenges. He is currently an Assistant Research Professor in the RI.
He earned his PhD in robotics in 2005 and his B.Sc. in computer engineering
from the University of Manitoba in 1998. Between 2006 and 2008 Chris was the
Director of Technology for the Urban Challenge, helping the team win the
2007 event.

Abstract: Yaser Sheikh:
With the proliferation of camera-enabled cell phones, domestic robots,
and wearable computers, moving cameras are being introduced at
an unprecedented rate into our social space. The confluence of camera
motion and the motion of objects in the scene complicates the task of
understanding the scene from video. In this talk, I discuss how and
when it is possible to disambiguate these two sources of motion,
towards the goal of analyzing dynamic scenes from moving cameras.
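One way to picture the problem: each tracked point's image motion mixes the camera's motion with any independent object motion, so points on moving objects show up as outliers to the dominant motion model. A minimal numpy sketch on synthetic tracks (hypothetical data and thresholds, not the speaker's method):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 2-D point tracks between two frames (hypothetical data).
# Most points lie on the static background and are displaced only by a
# global camera translation; a few lie on an independently moving object.
n_bg, n_obj = 40, 5
camera_shift = np.array([3.0, -1.0])
object_shift = np.array([8.0, 6.0])

flow = np.tile(camera_shift, (n_bg + n_obj, 1))
flow[n_bg:] += object_shift                    # the object adds its own motion
flow += rng.normal(0.0, 0.1, flow.shape)       # tracking noise

# Robustly estimate the dominant (camera-induced) motion with the
# per-axis median, then flag points whose residual motion is large:
# those are candidates for independently moving objects.
dominant = np.median(flow, axis=0)
residual = np.linalg.norm(flow - dominant, axis=1)
moving = residual > 3.0

print("estimated camera shift:", dominant.round(1))
print("independently moving points:", np.flatnonzero(moving))
```

Real scenes need a full geometric model (e.g., epipolar constraints) rather than a single translation, but the outlier-vs-dominant-motion structure is the same.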

Adrien Treuille:
This talk discusses recent work on real-time simulation problems that
develops new, low-dimensional representations of the phenomena being
simulated. These representations perform several functions at once.
First, they can correlate many degrees of freedom of the underlying
phenomenon, allowing us to represent the system with fewer variables.
Second, the reduction is constructed so as to allow rapid simulation
or control. Finally, the representations allow us to express
correctness constraints.
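The first two functions can be illustrated with a standard model-reduction recipe (POD plus Galerkin projection) on a toy linear system; this is a generic sketch under that assumption, not the speaker's actual formulation:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy full-order linear dynamics x_{t+1} = A x_t in n dimensions,
# constructed (hypothetically) so that trajectories live in a
# low-dimensional subspace.
n, r = 200, 5
U_true = np.linalg.qr(rng.normal(size=(n, r)))[0]
M = rng.normal(size=(r, r))
M *= 0.9 / np.linalg.norm(M, 2)               # contractive reduced dynamics
A = U_true @ M @ U_true.T

# Collect snapshots from one full-order run.
snapshots = []
x = U_true @ rng.normal(size=r)
for _ in range(30):
    snapshots.append(x)
    x = A @ x
S = np.array(snapshots).T                      # n x 30 snapshot matrix

# POD: the leading left singular vectors of S give a reduced basis
# that correlates the n degrees of freedom into r variables.
U, _, _ = np.linalg.svd(S, full_matrices=False)
basis = U[:, :r]

# Galerkin projection: the reduced operator acts on r variables,
# so each step costs O(r^2) instead of O(n^2).
A_r = basis.T @ A @ basis

# Compare one full-order step with one reduced step.
x0 = U_true @ rng.normal(size=r)
x_full = A @ x0
x_red = basis @ (A_r @ (basis.T @ x0))
rel_err = np.linalg.norm(x_full - x_red) / np.linalg.norm(x_full)
print("relative error of reduced step:", rel_err)
```

The simulated phenomena in the talk are nonlinear, which makes constructing the reduction harder, but the basic trade of many coupled variables for a few reduced coordinates is the same.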

Three examples of such simulations are presented covering fluids,
crowds, and human animation. The fluid model enables large, real-time,
detailed flows with continuous user interaction, and can handle moving
objects immersed in the flow. The crowd model is based on a fluid-like
continuum representation and naturally exhibits emergent phenomena
that have been observed in real crowds. Finally, the human model can
automatically compute near-optimal human animations using a low-
dimensional basis representation of the planning space.

Chris Urmson:
In this talk I will introduce research projects that are continuing the
development of autonomous vehicle technology beyond the Urban Challenge.
These projects attempt to answer questions such as: How do we build a safe,
commercial autonomous vehicle? How can we exploit the massive amounts of
digital imagery available to make driving safer and more robust? And how
can we exploit many-core processor technology to improve motion planning?

The Pathfinder project is developing fully autonomous off-highway trucks,
capable of operating at high levels of performance in demanding mine
environments. I will introduce some of the challenges in automating a 300+
ton truck capable of driving at 42 mph.

Digital aerial imagery provides a substantial amount of information. To
date, there are only a few examples of autonomous vehicles that attempt to
exploit this wealth of data. We are a few months into a project to exploit
aerial imagery to generate road models of parking lots.

Multi- and many-core processors are now readily available in the form of
graphics processing units (GPUs) found in video cards. These processors
are in general less expensive than contemporary central processing units
(CPUs) and have substantial processing power. The significant
architectural differences between GPUs and CPUs require new algorithms
to achieve the most benefit from these processors. I will present our
initial work in developing efficient GPU-appropriate algorithms for motion
planning.
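The kind of data-parallel structure that maps well to GPUs can be illustrated with synchronous wavefront cost propagation on a grid, where every cell is relaxed from its neighbors in lockstep; this is a generic sketch (vectorized numpy standing in for a GPU kernel), not the project's algorithm:

```python
import numpy as np

# 20x20 occupancy grid with a wall of obstacle cells.
free = np.ones((20, 20), dtype=bool)
free[5:15, 10] = False

INF = np.inf
cost = np.full(free.shape, INF)
cost[0, 0] = 0.0                               # start cell

def step(cost, free):
    """One synchronous relaxation: every cell takes the min over its
    4-neighbors plus a unit move cost, all updated in lockstep (the
    pattern a GPU kernel would run with one thread per cell)."""
    padded = np.pad(cost, 1, constant_values=INF)
    neighbor_min = np.minimum.reduce([
        padded[:-2, 1:-1], padded[2:, 1:-1],   # up, down
        padded[1:-1, :-2], padded[1:-1, 2:],   # left, right
    ])
    updated = np.minimum(cost, neighbor_min + 1.0)
    return np.where(free, updated, INF)        # obstacles stay unreachable

# Iterate to a fixed point: costs stop changing once every reachable
# cell holds its shortest-path distance from the start.
while True:
    new_cost = step(cost, free)
    if np.array_equal(new_cost, cost):
        break
    cost = new_cost

print("cost to the far corner:", cost[-1, -1])
```

Because each cell's update reads only its neighbors and writes only itself, all cells can be processed in parallel each iteration, which is exactly the structure GPUs reward.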