In the ever-evolving world of robotics, the seamless integration of new technologies promises to change how humans interact with machines. One transformative example is the time-of-flight, or ToF, sensor, which enables mobile robots to better perceive the world around them.
ToF sensors serve a purpose similar to lidar in that both measure how long light takes to travel to a surface and back in order to create depth maps. The key distinction lies in these cameras’ ability to provide depth images that can be processed faster, and they can be built into systems for various applications.
This speed and flexibility maximizes the utility of ToF technology in robotics, with the potential to benefit industries reliant on precise navigation and interaction.
Why mobile robots need 3D vision
Historically, RGB cameras were the primary sensor for industrial robots, capturing 2D images based on color information in a scene. These 2D cameras have been used for decades in industrial settings to guide robot arms in pick-and-pack applications.
Such 2D RGB cameras always require a camera-to-arm calibration sequence to map scene data to the robot’s world coordinate system. Without that calibration, 2D cameras cannot gauge distances, making them unusable as sensors for obstacle avoidance and guidance.
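To make the calibration step concrete, here is a minimal sketch in Python of how a calibrated camera-to-base transform maps a point seen by the camera into the robot’s coordinate system. The matrix values are hypothetical; in practice, `T_base_cam` would come from a hand-eye calibration routine, not from the code shown.

```python
import numpy as np

# Hypothetical result of a camera-to-arm ("hand-eye") calibration:
# a rigid transform from the camera frame to the robot base frame.
T_base_cam = np.array([
    [0.0, -1.0, 0.0, 0.30],   # rotation plus translation (meters)
    [1.0,  0.0, 0.0, 0.10],
    [0.0,  0.0, 1.0, 0.50],
    [0.0,  0.0, 0.0, 1.00],
])

def camera_to_base(point_cam):
    """Map a 3D point from camera coordinates to robot base coordinates."""
    p = np.append(point_cam, 1.0)        # homogeneous coordinates
    return (T_base_cam @ p)[:3]

# A part detected 0.8 m in front of the camera:
print(camera_to_base(np.array([0.0, 0.0, 0.8])))
```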
Autonomous mobile robots (AMRs) must accurately perceive the changing world around them to avoid obstacles and build a world map while remaining localized within that map. Time-of-flight sensors have been in existence since the late 1970s and have evolved to become one of the leading technologies for extracting depth data. It was natural to adopt ToF sensors to guide AMRs around their environments.
Lidar was adopted as one of the early types of ToF sensors to enable AMRs to sense the world around them. Lidar bounces a laser light pulse off of surfaces and measures the distance from the sensor to the surface.
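The underlying arithmetic is simple: range equals the round-trip time of the pulse times the speed of light, divided by two, since the light travels out and back. A quick illustration with made-up numbers:

```python
C = 299_792_458.0  # speed of light, m/s

def pulse_range(round_trip_s):
    """Range from a pulsed-lidar round-trip time: light travels out and back."""
    return C * round_trip_s / 2.0

# A pulse that returns after 20 nanoseconds hit a surface about 3 m away.
print(f"{pulse_range(20e-9):.2f} m")  # -> 3.00 m
```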
However, the first lidar sensors could only perceive a slice of the world around the robot using the flight path of a single laser line. These lidar units were typically positioned between 4 and 12 in. above the ground, and they could only see things that broke through that plane of light.
The next generation of AMRs began to employ 3D stereo RGB cameras that provide 3D depth information. These sensors use two stereo-mounted RGB cameras and a “light dot projector” that enables the camera array to accurately view the projected light on the scene in front of the camera.
Companies such as Photoneo and Intel RealSense were two of the early 3D RGB camera developers in this market. These cameras initially enabled industrial applications such as identifying and picking individual items from bins.
Until the advent of these sensors, bin picking was known as a “holy grail” application, one which the vision guidance community knew would be difficult to solve.
The camera landscape evolves
A salient feature of newer ToF cameras is their low-light performance, achieved with illumination designed for human-eye safety. A 6 m (19.6 ft.) range in far mode facilitates optimal people and object detection, while the close-range mode excels in volume measurement and quality inspection.
The cameras return the data in the form of a “point cloud.” On-camera depth processing mitigates computational overhead on the host and is potentially useful for applications like warehouse robots, service robots, robotic arms, autonomous guided vehicles (AGVs), people-counting systems, 3D face recognition for anti-spoofing, and patient care and monitoring.
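As an illustration of what a point cloud is, the sketch below back-projects a ToF depth image into 3D points using the standard pinhole camera model. The intrinsics (`fx`, `fy`, `cx`, `cy`) are hypothetical; a real camera reports its own calibrated values.

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth image (meters) into an N x 3 point cloud
    using the pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]      # drop invalid (zero-depth) pixels

# Hypothetical 4 x 4 depth patch, everything 1 m away:
cloud = depth_to_point_cloud(np.full((4, 4), 1.0), fx=500, fy=500, cx=2, cy=2)
print(cloud.shape)  # (16, 3)
```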
Time-of-flight technology is significantly more affordable than other 3D-depth range-scanning technologies like structured-light camera/projector systems.
For instance, ToF sensors facilitate the autonomous movement of outdoor delivery robots by precisely measuring depth in real time.
How ToF sensors take perception a step further
A fundamental difference between time-of-flight and RGB cameras is their ability to perceive depth. RGB cameras capture images based on color information, whereas ToF cameras measure the time taken for light to bounce off an object and return, thus yielding depth information for every pixel.
ToF sensors use that data to generate detailed 3D maps of their surroundings with high precision, endowing mobile robots with an added dimension of depth perception.
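Many ToF cameras measure that round trip indirectly, as the phase shift of a continuously modulated light signal rather than a raw pulse time. A minimal sketch of the relationship, with made-up numbers:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def cw_tof_depth(phase_shift_rad, mod_freq_hz):
    """Depth from a continuous-wave ToF measurement:
    d = c * phase / (4 * pi * f_mod). The factor of two for the round
    trip is folded in; phase wraps at the ambiguity range c / (2 * f_mod)."""
    return C * phase_shift_rad / (4.0 * math.pi * mod_freq_hz)

# A pi/2 phase shift at a 20 MHz modulation frequency is roughly 1.87 m.
print(f"{cw_tof_depth(math.pi / 2, 20e6):.2f} m")
```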
Furthermore, stereo vision technology has also evolved. Modern stereo systems use an IR pattern projector to illuminate the scene and compare the disparities between images from two 2D sensors, ensuring superior low-light performance.
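For comparison, a stereo pair recovers depth from disparity via Z = f * B / d, where f is the focal length in pixels, B the baseline between the two cameras, and d the measured disparity. A minimal sketch with hypothetical numbers:

```python
def stereo_depth(disparity_px, focal_px, baseline_m):
    """Depth from stereo disparity: Z = f * B / d.
    Smaller disparity means a more distant point."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# 700-pixel focal length, 5 cm baseline, 10 px disparity -> 3.5 m
print(f"{stereo_depth(10, 700, 0.05):.2f} m")
```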
In comparison, ToF cameras use a sensor, a lighting unit, and a depth-processing unit. This allows AMRs to have full depth-perception capabilities out of the box without further calibration.
One key advantage of ToF cameras is that they extract 3D images at high frame rates, rapidly separating foreground from background. They can also function in both bright and dark lighting conditions through the use of active illumination.
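A toy example of that foreground/background split: with per-pixel depth, isolating nearby objects reduces to a single range threshold, with no color or texture analysis required (the 1.5 m cutoff below is arbitrary).

```python
import numpy as np

def foreground_mask(depth, max_range_m=1.5):
    """Segment the foreground of a depth image by thresholding range:
    pixels closer than max_range_m (and valid, i.e. > 0) are foreground."""
    return (depth > 0) & (depth < max_range_m)

# Hypothetical depth image: an object at 1 m against a wall at 3 m.
depth = np.full((4, 4), 3.0)
depth[1:3, 1:3] = 1.0
print(foreground_mask(depth).astype(int))
```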
In summary, compared with RGB cameras, ToF cameras can operate in low-light applications and without the need for calibration. ToF camera units can also be more affordable than stereo RGB cameras or most lidar units.
One downside for ToF cameras is that they must be used in isolation, as the light emitted by one camera can confuse nearby ToF cameras. ToF cameras also cannot be used in overly bright environments, because ambient light can wash out the emitted light source.
Applications of ToF sensors
ToF cameras are enabling multiple AMR/AGV applications in warehouses. These cameras provide warehouse operations with depth perception intelligence that enables robots to see the world around them. This data enables the robots to make critical business decisions with accuracy, convenience, and speed. These include functionalities such as:
- Localization: The robot identifies its position by scanning its surroundings to create a map and matching the information collected against known data
- Mapping: The robot builds a map using the transit time of the light reflected from target objects, combined with a SLAM (simultaneous localization and mapping) algorithm, as in the sketch after this list
- Navigation: The robot moves from Point A to Point B on a known map
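The following is a deliberately simplified flavor of the mapping step: it marks occupancy-grid cells from range returns at known angles. A real SLAM pipeline adds scan matching, pose estimation, and loop closure; the grid size, resolution, and poses here are hypothetical.

```python
import math
import numpy as np

def mark_hits(grid, robot_xy, ranges_m, angles_rad, resolution_m=0.1):
    """Mark grid cells hit by ToF/lidar returns as occupied (1).
    robot_xy is the robot's position in meters; grid origin is cell (0, 0)."""
    for r, a in zip(ranges_m, angles_rad):
        x = robot_xy[0] + r * math.cos(a)
        y = robot_xy[1] + r * math.sin(a)
        i, j = int(y / resolution_m), int(x / resolution_m)
        if 0 <= i < grid.shape[0] and 0 <= j < grid.shape[1]:
            grid[i, j] = 1
    return grid

grid = np.zeros((20, 20), dtype=int)            # 2 m x 2 m at 10 cm cells
mark_hits(grid, (1.0, 1.0), [0.8, 0.8], [0.0, math.pi / 2])
print(grid.sum(), "cells marked occupied")
```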
With ToF technology, AMRs can understand their environment in 3D before deciding the path to be taken to avoid obstacles.
Finally, there’s odometry, the process of estimating the change in a mobile robot’s position over time by analyzing data from motion sensors. ToF technology has shown that it can be fused with these other sensors to improve the accuracy of AMR pose estimates.
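One simple way to picture that fusion is a complementary filter blending a drift-prone wheel-odometry estimate with an occasional absolute fix from ToF-based localization. The blend weight below is arbitrary, and production systems typically use an extended Kalman filter instead.

```python
def fuse_pose(odom_xy, tof_xy, alpha=0.8):
    """Blend wheel-odometry position with a ToF-localization fix.
    alpha weights the odometry; (1 - alpha) weights the ToF estimate."""
    return tuple(alpha * o + (1.0 - alpha) * t for o, t in zip(odom_xy, tof_xy))

# Odometry has drifted to (5.20, 3.10); ToF localization says (5.00, 3.00).
print(fuse_pose((5.20, 3.10), (5.00, 3.00)))  # -> (5.16, 3.08)
```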
About the author
Maharajan Veerabahu has more than two decades of experience in embedded software and product development. He is a co-founder and vice president of product development services at e-con Systems, a prominent OEM camera product and design services company. Veerabahu is also a co-founder of VisAi Labs, a computer vision and AI R&D unit that provides vision AI-based solutions for its camera customers.