
Learning Mobile Robot Behaviour Dynamics



Automated Media Analysis

Recognized Entities
Computer animation


Formal Metadata

Title Learning Mobile Robot Behaviour Dynamics
Authors Narayanan, Krishna Kumar
Posada, Luis-Felipe
Hoffmann, Frank
Bertram, Torsten
License CC Attribution 3.0 Unported:
You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.
DOI 10.5446/15419
Publisher TU Dortmund, Lehrstuhl für Regelungssystemtechnik
Release Date 2012
Language Silent film
Production Year 2012
Production Place Dortmund

Content Metadata

Subject Area Engineering
Abstract This video shows mobile robot behavior dynamics learned from demonstration examples. The mobile robot is equipped with an omnidirectional camera with a 360-degree horizontal and 75-degree vertical field of view, directed towards the bottom to capture the floor. An ensemble-of-experts segmentation scheme partitions the omnidirectional image into floor and non-floor regions. Three indoor robotic behaviors, viz. corridor following, obstacle avoidance and homing, are tele-operated on the robot by a teacher during the demonstration phase, during which the omnidirectional image and the corresponding executed actions are recorded. Each behavior is then represented by a dynamic system that couples the perception and the performed action. Thus every behavior possesses a behavioral dynamics, and the variables that characterize this dynamics are called behavioral variables. Here the behavioral dynamics are represented by Gaussian Mixture Models whose parameters are identified from the demonstrations. The recorded behavioral variables are:
Corridor following: rotational velocity, lateral offset of the robot to the corridor (alpha), and orientation error of the robot to the center of the corridor (beta)
Obstacle avoidance: rotational velocity, sine of the next traversable safe direction sin(theta), and the cosine of the orientation of the nearest obstacle times the inverse of its distance cos(theta)·1/d
Homing: rotational velocity, distance to the goal point, and orientation to the goal point
The homing or docking zone is marked by two red circles; the midpoint of the virtual line connecting their centroids is the docking/homing point. The three behaviors are coordinated manually, either by behavior arbitration (subsumption architecture) or by command fusion (weighted summation).
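As an illustration, the obstacle-avoidance behavioral variables described in the abstract could be computed from the perceived obstacle geometry roughly as follows. This is a sketch under assumptions: the function name and argument names are invented for illustration and are not the authors' code.

```python
import math

def obstacle_avoidance_features(theta_safe, theta_obs, d_obs):
    """Illustrative computation of the obstacle-avoidance behavioral variables:
    the sine of the next traversable safe direction, and the cosine of the
    nearest obstacle's orientation scaled by the inverse of its distance.
    theta_safe, theta_obs are angles in radians; d_obs is a distance > 0."""
    return math.sin(theta_safe), math.cos(theta_obs) / d_obs
```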
The video shows each learned behavior performing its task individually, and finally the fused behavior architecture navigating through the indoor environment and docking at the final goal point. 0:00 Corridor following, 0:27 Obstacle avoidance, 1:02 Homing, 1:39 Behavior coordination via command fusion, 3:20 Credits
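The two learning components named in the abstract, a Gaussian Mixture Model coupling perception to action and command fusion by weighted summation, can be sketched as below. This is a minimal illustration, not the authors' implementation: it assumes each behavior's GMM is fitted over the joint [perception, action] space with the rotational velocity as the last dimension, and uses Gaussian Mixture Regression to predict the velocity from the perceptual variables. All function and variable names are assumptions.

```python
import numpy as np

def gmr_predict(weights, means, covs, x):
    """Gaussian Mixture Regression: condition a GMM over the joint
    [perception, action] space on perception input x (1-D array) and
    return the expected action (rotational velocity, the last dimension)."""
    d = len(x)                      # perception dimensionality
    h = np.empty(len(weights))      # responsibility of each component for x
    y = np.empty(len(weights))      # conditional action mean per component
    for k, (w, mu, S) in enumerate(zip(weights, means, covs)):
        mu_x, mu_y = mu[:d], mu[d:]
        Sxx, Sxy = S[:d, :d], S[:d, d:]
        diff = x - mu_x
        # unnormalised Gaussian density of x under component k, times its weight
        h[k] = w * np.exp(-0.5 * diff @ np.linalg.solve(Sxx, diff)) \
               / np.sqrt(np.linalg.det(2 * np.pi * Sxx))
        # conditional mean of the action given x under component k
        y[k] = (mu_y + Sxy.T @ np.linalg.solve(Sxx, diff)).item()
    h /= h.sum()
    return float(h @ y)

def command_fusion(omegas, weights):
    """Command fusion by weighted summation: blend the rotational
    velocities proposed by the individual behaviors."""
    w = np.asarray(weights, dtype=float)
    return float(np.dot(omegas, w) / w.sum())
```

For example, a corridor-following behavior would call `gmr_predict` with its GMM fitted over (alpha, beta, rotational velocity), and the coordinator would pass the per-behavior velocities to `command_fusion` with situation-dependent weights.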
Keywords navigation
Gaussian mixture model
learning from demonstration
visual behaviors
obstacle avoidance

