Researchers help robots navigate crowded places with new visual perception method

Credit: University of Toronto

A team of researchers at the University of Toronto has found a way to enhance the visual perception of robotic systems by combining two different types of neural networks.

The innovation could help self-driving vehicles navigate busy streets or enable medical robots to operate effectively in crowded hospital hallways.

“What we did instead was to carefully study how the pieces fit together. Specifically, we investigated how two parts of the motion estimation problem, the precise perception of depth and of motion, are linked in a powerful way,” says Jonathan Kelly, an associate professor at the University of Toronto Institute for Aerospace Studies in the Faculty of Applied Science & Engineering.

Researchers at Kelly’s Space and Terrestrial Autonomous Robotic Systems Laboratory aim to build reliable systems that can help humans accomplish a variety of tasks. For example, they designed an electric wheelchair that can automate some common tasks such as moving through doorways.

More recently, they have focused on technologies that will help robots move out of the carefully controlled environments in which they are commonly used today and into the less predictable world that humans are accustomed to navigating.

Credit: University of Toronto

“Ultimately, we look to develop situational awareness of the highly dynamic environments where people work, whether it’s a busy hospital corridor, a crowded public square, or a city street full of traffic and pedestrians,” Kelly says.

One of the difficult problems that robots have to solve in all of these spaces is known to the robotics community as “structure from motion.” This is the process by which a robot uses a set of images taken from a moving camera to build a 3D model of the environment it is in, similar to the way humans use their eyes to perceive the world around them.

In current robotic systems, structure from motion is typically carried out in two steps, each using different information from a set of monocular images. One is depth perception, which tells the robot how far away objects are in its field of view. The other, known as egomotion, describes the robot’s 3D movement relative to its environment.
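For readers who want a concrete picture, here is a minimal PyTorch sketch of that conventional two-network setup; the DepthNet and PoseNet modules below are illustrative stand-ins, not the architectures used by the Toronto team.

```python
import torch
import torch.nn as nn

class DepthNet(nn.Module):
    """Toy stand-in for a monocular depth network: one image in, dense depth out."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1), nn.Softplus(),  # keep depth positive
        )

    def forward(self, image):           # image: (B, 3, H, W)
        return self.net(image)          # depth: (B, 1, H, W)

class PoseNet(nn.Module):
    """Toy stand-in for an egomotion network: two frames in, 6-DoF motion out."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(6, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(16, 6)    # 3 translation + 3 rotation parameters

    def forward(self, frame_t, frame_t1):
        x = torch.cat([frame_t, frame_t1], dim=1)      # stack the two frames
        return self.head(self.encoder(x).flatten(1))   # egomotion: (B, 6)

# The two estimates come from separate networks with no information sharing,
# so nothing forces them to be mutually consistent.
frame_t, frame_t1 = torch.rand(1, 3, 64, 64), torch.rand(1, 3, 64, 64)
depth = DepthNet()(frame_t)                 # (1, 1, 64, 64)
egomotion = PoseNet()(frame_t, frame_t1)    # (1, 6)
print(depth.shape, egomotion.shape)
```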

“Any robot navigating through a space needs to know where static and dynamic objects are relative to itself, as well as how its own motion changes the scene,” says Kelly. “For example, when a train moves along a track, a passenger looking out the window notices that objects in the distance appear to move slowly, while nearby objects whip past.”

The challenge is that in many current systems, depth estimation is separated from motion estimation, with no explicit information sharing between the two neural networks. Coupling depth and motion estimation ensures that the two estimates are consistent with each other.

“There are constraints on depth that are determined by motion, and constraints on motion that are determined by depth,” Kelly says. “If the system does not couple these two neural network components, the end result is an inaccurate estimate of where everything is in the world and where the robot is.”
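One standard way such coupling is enforced in self-supervised structure-from-motion systems (a general technique, not necessarily the exact formulation used in this work) is a view-synthesis check: the predicted depth and egomotion are used to warp one frame into the other camera’s viewpoint, and the photometric mismatch penalizes errors in either quantity. A minimal PyTorch sketch, assuming known camera intrinsics K:

```python
import torch
import torch.nn.functional as F

def warp_frame(img_src, depth_tgt, T_tgt_to_src, K):
    """Warp the source frame into the target view using the target depth map and
    the relative camera pose; the result can be compared photometrically against
    the real target frame, constraining depth and egomotion jointly."""
    B, _, H, W = depth_tgt.shape
    device = depth_tgt.device
    # Pixel grid in homogeneous coordinates
    ys, xs = torch.meshgrid(
        torch.arange(H, device=device, dtype=torch.float32),
        torch.arange(W, device=device, dtype=torch.float32),
        indexing="ij",
    )
    pix = torch.stack([xs, ys, torch.ones_like(xs)], dim=0).reshape(3, -1)  # (3, H*W)
    # Back-project to 3D with the depth, move into the source frame, re-project
    cam = torch.linalg.inv(K) @ pix                                 # viewing rays
    cam = cam.unsqueeze(0) * depth_tgt.reshape(B, 1, -1)            # (B, 3, H*W)
    cam_h = torch.cat([cam, torch.ones(B, 1, H * W, device=device)], dim=1)
    cam_src = (T_tgt_to_src @ cam_h)[:, :3]                         # (B, 3, H*W)
    pix_src = K @ cam_src
    pix_src = pix_src[:, :2] / pix_src[:, 2:].clamp(min=1e-6)
    # Normalize pixel coordinates to [-1, 1] for grid_sample
    gx = 2.0 * pix_src[:, 0] / (W - 1) - 1.0
    gy = 2.0 * pix_src[:, 1] / (H - 1) - 1.0
    grid = torch.stack([gx, gy], dim=-1).reshape(B, H, W, 2)
    return F.grid_sample(img_src, grid, align_corners=True)

# Toy usage: if either the depth or the pose is wrong, the photometric error rises.
B, H, W = 1, 32, 32
K = torch.tensor([[30.0, 0.0, W / 2], [0.0, 30.0, H / 2], [0.0, 0.0, 1.0]])
img_src, img_tgt = torch.rand(B, 3, H, W), torch.rand(B, 3, H, W)
depth_tgt = torch.ones(B, 1, H, W) * 5.0
T = torch.eye(4).unsqueeze(0)            # identity egomotion for the toy example
photometric_error = (warp_frame(img_src, depth_tgt, T, K) - img_tgt).abs().mean()
print(photometric_error.item())
```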

In a recent study, two of Kelly’s students, Ph.D. candidate Brandon Wagstaff and former Ph.D. student Valentin Peretroukhin, examined and improved on existing structure-from-motion methods.

Their new system makes the egomotion prediction a function of depth, which increases the system’s overall accuracy and reliability. They recently presented their work at the International Conference on Intelligent Robots and Systems (IROS) in Kyoto, Japan.
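As an illustration only, and not necessarily the authors’ formulation, one simple way to make a pose estimate a function of depth is to back-project matched pixels into 3D using the predicted depth and then recover the relative pose with a closed-form rigid alignment (the Kabsch solution). In the sketch below, randomly generated points stand in for depth-backprojected correspondences:

```python
import math
import torch

def pose_from_depth_points(pts_tgt, pts_src):
    """Closed-form (Kabsch) least-squares rigid transform R, t with
    R @ p_tgt + t ~ p_src, for matched 3D points obtained by back-projecting
    pixels with the predicted depth maps."""
    c_t, c_s = pts_tgt.mean(0), pts_src.mean(0)
    H = (pts_tgt - c_t).T @ (pts_src - c_s)          # 3x3 cross-covariance
    U, S, Vt = torch.linalg.svd(H)
    d = torch.det(Vt.T @ U.T).sign().item()          # guard against reflections
    D = torch.diag(torch.tensor([1.0, 1.0, d]))
    R = Vt.T @ D @ U.T
    t = c_s - R @ c_t
    return R, t

# Toy check: recover a known rotation and translation from noiseless points.
c, s = math.cos(0.1), math.sin(0.1)
R_true = torch.tensor([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
t_true = torch.tensor([0.20, -0.10, 0.05])
pts_tgt = torch.rand(200, 3) * 5.0                   # stand-ins for back-projected points
pts_src = pts_tgt @ R_true.T + t_true                # p_src = R_true @ p_tgt + t_true
R_est, t_est = pose_from_depth_points(pts_tgt, pts_src)
print(torch.allclose(R_est, R_true, atol=1e-4), torch.allclose(t_est, t_true, atol=1e-4))
```

Because the alignment is differentiable, errors in the recovered motion can propagate back into the depth prediction, which is one way to realize the kind of coupling described above.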

Credit: UTIAS STARS Lab

“Compared with learning-based methods, our new system was able to reduce motion estimation error by about 50%,” Wagstaff says.

“This improvement in the accuracy of motion estimation was demonstrated not only on data similar to that used in network training, but also on significantly different forms of data, indicating that the proposed method was able to generalize across many different environments.”

Maintaining accuracy when operating in new environments is a challenge for neural networks. The team has since expanded their research beyond visual motion estimation to include inertial sensing, an additional sensor analogous to the vestibular system in the human inner ear.

“We are now working on robotic applications that can simulate human eyes and inner ears, which provide information about balance, movement and acceleration,” Kelly says.

“This will enable even more accurate motion estimation and help the system handle situations such as dramatic scene changes, like an environment suddenly becoming darker when a car enters a tunnel, or a camera failing when it points directly at the sun.”

The potential applications for such new approaches are diverse, from improving the handling of self-driving vehicles to enabling drones to fly safely through crowded environments to deliver goods or conduct environmental monitoring.

“We don’t build caged machines,” says Kelly. “We want to design powerful robots that can move safely around people and environments.”

Provided by University of Toronto


Citation: Researchers help robots navigate crowded spaces with new visual perception method (2022, November 10), retrieved November 10, 2022 from https://techxplore.com/news/2022-11-robots-crowded-spaces-visual-perception.html

This document is subject to copyright. Notwithstanding any fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for informational purposes only.


