Evolution of Robots

How robots perceive the world around them


Thanks to artificial intelligence, robots can do incredible things: working collaboratively with humans in factories, moving packages quickly through warehouses, and exploring the surface of Mars.

But despite these feats, we're only just beginning to see robots capable of brewing a good cup of coffee. For robots, being able to perceive and understand the world around them is essential if they are to integrate smoothly into everyday environments.

Routine tasks such as turning on the coffee machine, dispensing beans, and finding milk and sugar require perceptual abilities that remain out of reach for many machines.

However, this is changing. Several technologies are being developed to help robots better perceive the environment in which they work, from recognizing the objects around them to measuring distances. Below is a sample of these technologies.

LiDAR: light- and laser-based distance sensors

Several companies are developing LiDAR (Light Detection and Ranging) technologies for distance measurement and object detection, helping robots and autonomous vehicles perceive surrounding objects.

The principle of LiDAR is simple: emit a pulse of light at a surface and measure the time it takes for the reflection to return to the sensor.
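The time-of-flight idea can be sketched in a few lines. This is a minimal illustration of the principle described above, not any real sensor's API: the pulse travels to the surface and back, so the distance is half the round trip.

```python
# Minimal sketch of the LiDAR time-of-flight principle (illustrative only):
# distance = (speed of light * round-trip time) / 2

SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def tof_distance(round_trip_seconds: float) -> float:
    """Convert a measured round-trip time into a one-way distance in meters."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# A pulse that returns after roughly 66.7 nanoseconds traveled about 10 m each way.
print(round(tof_distance(66.7e-9), 2))  # prints 10.0
```

The tiny time scales involved (tens of nanoseconds for a few meters) are why LiDAR units need very precise timing electronics.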


By emitting laser pulses at a surface in rapid succession, the sensor can build a detailed "map" of the surface it is measuring. There are currently three main types of sensors: single-beam, multi-beam, and rotating sensors.

Single-beam sensors produce one beam of light and are typically used to measure distances to large objects, such as walls, floors, and ceilings. The beam may be highly collimated, like a laser pointer's (it stays narrow over its entire range), or it may come from an LED or pulsed diode and behave more like a flashlight (it diverges over long distances).

Multi-beam sensors produce several detection beams simultaneously and are ideal for object and collision avoidance. Finally, rotating sensors sweep a single beam as the device rotates and are often used for object detection and avoidance.
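A rotating sensor's "map" comes from simple geometry: each reading pairs an angle with a range, which converts to a point in the plane. A hedged sketch of that conversion (the function name and parameters are illustrative, not from any sensor library):

```python
import math

def scan_to_points(ranges, angle_start=0.0, angle_step=math.radians(1.0)):
    """Convert a rotating single-beam scan (one range reading per angular
    step) into 2D points. A toy sketch of how a rotating sensor builds a
    planar map of its surroundings."""
    points = []
    for i, r in enumerate(ranges):
        theta = angle_start + i * angle_step
        points.append((r * math.cos(theta), r * math.sin(theta)))
    return points

# Four readings a quarter-turn apart trace out points on the surrounding walls.
pts = scan_to_points([2.0, 1.5, 2.0, 1.5], angle_step=math.radians(90))
```

Real scanners add calibration, noise filtering, and motion compensation on top of this basic polar-to-Cartesian step.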

     Part Detection Sensors

An important task often entrusted to robots, especially in manufacturing, is picking up objects. More specifically, a robot needs to know where an object is and whether it is ready to be picked up. This requires various sensors working together to help the machine detect the position and orientation of the object. A robot may already have simple sensors built in, which can be a suitable solution if you only need to detect the presence or absence of an object.

     Part detection sensors are commonly used in industrial robots and can detect whether a part has arrived at a particular location. There are different types of sensors, each with unique capabilities, including detecting the presence, shape, distance, color, and orientation of an object.

Robotic vision sensors offer several high-tech benefits to collaborative robots across industries. 2D and 3D vision lets robots manipulate different parts without reprogramming, pick up objects whose position and orientation are unknown, and correct for inaccuracies.

3D vision and the future of robot "senses"

Bringing robots into more intimate parts of our lives, such as our homes, requires a deeper and more nuanced understanding of three-dimensional objects. Robots can certainly "see" objects using cameras and sensors, but interpreting what they see at a glance is far harder.


A robot perception algorithm, developed by a graduate student at Duke University and his thesis supervisor, can guess what an object is, estimate how it is oriented, and "imagine" any parts of the object that are out of view.

     The algorithm was developed using 4,000 full 3D scans of common household objects, including an assortment of beds, chairs, desks, monitors, chests of drawers, bedside tables, tables, bathtubs, and sofas. Each scan was then broken down into tens of thousands of voxels, stacked one on top of the other, for easy processing.
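Voxelization itself is straightforward to illustrate: each 3D point is snapped to the small cube (voxel) it falls in, turning an arbitrary point cloud into a regular grid a learning algorithm can digest. A hypothetical sketch of that preprocessing step (the 5 cm cell size is an arbitrary choice for the example):

```python
def voxelize(points, voxel_size=0.05):
    """Quantize a 3D point cloud into the set of occupied voxel cells.

    A simplified sketch of the kind of preprocessing described above:
    each scan becomes a set of small cubes that are easy to process.
    """
    occupied = set()
    for x, y, z in points:
        occupied.add((int(x // voxel_size),
                      int(y // voxel_size),
                      int(z // voxel_size)))
    return occupied

# Two nearby points fall in the same 5 cm voxel; a distant one does not.
cells = voxelize([(0.01, 0.01, 0.01), (0.02, 0.03, 0.04), (1.0, 1.0, 1.0)])
print(len(cells))  # prints 2
```

Collapsing many raw points into one cell is also what makes voxel grids compact: resolution is traded for a fixed, regular structure.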

Using probabilistic principal component analysis, the algorithm learned object categories along with their similarities and differences. This lets it recognize a new object without having to sift through its entire catalog to find a match.
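The flavor of that idea can be shown with ordinary (non-probabilistic) PCA, a simplified stand-in for the researchers' actual method: fit a low-dimensional subspace per category, then classify a new object by which category's subspace reconstructs it best, with no per-scan comparison. All names and the toy data below are illustrative assumptions.

```python
import numpy as np

def fit_category(samples, n_components=1):
    """Fit a linear subspace (PCA via SVD) to flattened voxel grids of
    one object category. A simplified stand-in for probabilistic PCA."""
    X = np.asarray(samples, dtype=float)
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:n_components]

def reconstruction_error(x, model):
    """Project an object onto a category's subspace and measure how much
    of it the category fails to explain."""
    mean, components = model
    centered = np.asarray(x, dtype=float) - mean
    approx = components.T @ (components @ centered)
    return float(np.linalg.norm(centered - approx))

def classify(x, models):
    """Pick the category whose subspace reconstructs the object best --
    no need to compare against every stored scan."""
    return min(models, key=lambda name: reconstruction_error(x, models[name]))

# Toy example: "chairs" vary along the first voxel dimension, "tables"
# along the third, so a new chair-like grid is explained best by "chair".
models = {
    "chair": fit_category([[1, 0, 0, 0], [2, 0, 0, 0], [3, 0, 0, 0]]),
    "table": fit_category([[0, 0, 1, 0], [0, 0, 2, 0], [0, 0, 3, 0]]),
}
print(classify([2.5, 0, 0, 0], models))  # prints chair
```

The payoff is the one the article describes: each category is summarized by a small model, so recognition cost grows with the number of categories, not the number of stored scans.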

Although still in its infancy, this algorithm (and others like it) is pushing robotics further toward working in tandem with humans in environments far less structured and predictable than a laboratory or factory floor.

The ability to perceive and interact with objects and the surrounding environment is essential for robots working alongside humans. As the technology advances, there will undoubtedly be a growing need for robotics education and literacy, as well as for more robotics technicians.
