Sensors / Sensing Systems Archives - The Robot Report
https://www.therobotreport.com/category/technologies/sensors-sensing/

RBR50 Spotlight: Glidance provides independence to visually impaired individuals
https://www.therobotreport.com/rbr50-spotlight-glidance-provides-independence-visually-impaired-individuals/
Tue, 25 Jun 2024 10:00:17 +0000
Glide, the flagship product of Glidance, helps visually impaired people move independently using sophisticated guiding technology and a unique mechanical design.

RBR50 banner with a woman in a crosswalk using the Glidance device.


Organization: Glidance Inc.
Country: U.S.
Website: https://glidance.io/
Year Founded: 2023
Number of Employees: 2-10
Innovation Class: Application


In the U.S., around 1 million adults are blind. Yet, only 2% to 8% use a white cane for navigation. Most instead rely on guide dogs or sighted companions, according to the Perkins School for the Blind. This reliance limits independence and mobility, a challenge that Glidance, winner of the 2023 RoboBusiness Pitchfire Competition, aims to address through robotics.

The company’s flagship product, Glide, offers autonomous mobility assistance for the visually impaired. It incorporates advanced guiding technologies and a unique mechanical design to foster independence.

Glide’s innovation provides haptic and audio feedback for safety. The product has garnered praise from industry experts, particularly for its potential to serve an underserved market segment. CEO Amos Miller, himself blind, brings a firsthand perspective to Glidance’s mission.

The Seattle-based company plans to sell the device for about the cost of a new cell phone. It will also offer subscription plans that enable feature updates and options that will make Glide easily configurable for each user’s needs.

With a battery life of up to eight hours and quick user adaptation, Glide promises to revolutionize mobility for the visually impaired. Founded by Miller and Mike Sinclair, Glidance represents hope for those seeking greater freedom and autonomy in navigating the world.

Miller noted that more than 50,000 individuals lose their sight every year, yet there are only about 10,000 working guide dogs worldwide in any given year. This leaves a huge gap and gives a device like Glide the opportunity to make a huge difference in users’ lives. It can cost up to $50,000 to train and care for a guide dog throughout its working lifetime with a person who is blind.




Explore the RBR50 Robotics Innovation Awards 2024.


RBR50 Robotics Innovation Awards 2024

Organization | Innovation
ABB Robotics | Modular industrial robot arms offer flexibility
Advanced Construction Robotics | IronBOT makes rebar installation faster, safer
Agility Robotics | Digit humanoid gets feet wet with logistics work
Amazon Robotics | Amazon strengthens portfolio with heavy-duty AGV
Ambi Robotics | AmbiSort uses real-world data to improve picking
Apptronik | Apollo humanoid features bespoke linear actuators
Boston Dynamics | Atlas shows off unique skills for humanoid
Brightpick | Autopicker applies mobile manipulation, AI to warehouses
Capra Robotics | Hircus AMR bridges gap between indoor, outdoor logistics
Dexterity | Dexterity stacks robotics and AI for truck loading
Disney | Disney brings beloved characters to life through robotics
Doosan | App-like Dart-Suite eases cobot programming
Electric Sheep | Vertical integration positions landscaping startup for success
Exotec | Skypod ASRS scales to serve automotive supplier
FANUC | FANUC ships one-millionth industrial robot
Figure | Startup builds working humanoid within one year
Fraunhofer Institute for Material Flow and Logistics | evoBot features unique mobile manipulator design
Gardarika Tres | Develops de-mining robot for Ukraine
Geek+ | Upgrades PopPick goods-to-person system
Glidance | Provides independence to visually impaired individuals
Harvard University | Exoskeleton improves walking for people with Parkinson’s disease
ifm efector | Obstacle Detection System simplifies mobile robot development
igus | ReBeL cobot gets low-cost, human-like hand
Instock | Instock turns fulfillment processes upside down with ASRS
Kodama Systems | Startup uses robotics to prevent wildfires
Kodiak Robotics | Autonomous pickup truck to enhance U.S. military operations
KUKA | Robotic arm leader doubles down on mobile robots for logistics
Locus Robotics | Mobile robot leader surpasses 2 billion picks
MassRobotics Accelerator | Equity-free accelerator positions startups for success
Mecademic | MCS500 SCARA robot accelerates micro-automation
MIT | Robotic ventricle advances understanding of heart disease
Mujin | TruckBot accelerates automated truck unloading
Mushiny | Intelligent 3D sorter ramps up throughput, flexibility
NASA | MOXIE completes historic oxygen-making mission on Mars
Neya Systems | Development of cybersecurity standards harden AGVs
NVIDIA | Nova Carter gives mobile robots all-around sight
Olive Robotics | EdgeROS eases robotics development process
OpenAI | LLMs enable embedded AI to flourish
Opteran | Applies insect intelligence to mobile robot navigation
Renovate Robotics | Rufus robot automates installation of roof shingles
Robel | Automates railway repairs to overcome labor shortage
Robust AI | Carter AMR joins DHL's impressive robotics portfolio
Rockwell Automation | Adds OTTO Motors mobile robots to manufacturing lineup
Sereact | PickGPT harnesses power of generative AI for robotics
Simbe Robotics | Scales inventory robotics deal with BJ’s Wholesale Club
Slip Robotics | Simplifies trailer loading/unloading with heavy-duty AMR
Symbotic | Walmart-backed company rides wave of logistics automation demand
Toyota Research Institute | Builds large behavior models for fast robot teaching
ULC Technologies | Cable Splicing Machine improve safety, power grid reliability
Universal Robots | Cobot leader strengthens lineup with UR30

At CVPR, NVIDIA offers Omniverse microservices, shows advances in visual generative AI
https://www.therobotreport.com/nvidia-offers-omniverse-microservices-advances-visual-generative-ai-cvpr/
Mon, 17 Jun 2024 13:00:07 +0000
Omniverse Cloud Sensor RTX can generate synthetic data for robotics, says NVIDIA, which is presenting over 50 research papers at CVPR.


As shown at CVPR, Omniverse Cloud Sensor RTX microservices generate high-fidelity sensor simulation from an autonomous vehicle (left) and an autonomous mobile robot (right). Sources: NVIDIA, Fraunhofer IML (right)

NVIDIA Corp. today announced NVIDIA Omniverse Cloud Sensor RTX, a set of microservices that enable physically accurate sensor simulation to accelerate the development of all kinds of autonomous machines.

NVIDIA researchers are also presenting 50 research projects around visual generative AI at the Computer Vision and Pattern Recognition, or CVPR, conference this week in Seattle. They include new techniques to create and interpret images, videos, and 3D environments. In addition, the company said it has created its largest indoor synthetic dataset with Omniverse for CVPR’s AI City Challenge.

Sensors provide industrial manipulators, mobile robots, autonomous vehicles, humanoids, and smart spaces with the data they need to comprehend the physical world and make informed decisions.

NVIDIA said developers can use Omniverse Cloud Sensor RTX to test sensor perception and associated AI software in physically accurate, realistic virtual environments before real-world deployment. This can enhance safety while saving time and costs, it said.

“Developing safe and reliable autonomous machines powered by generative physical AI requires training and testing in physically based virtual worlds,” stated Rev Lebaredian, vice president of Omniverse and simulation technology at NVIDIA. “Omniverse Cloud Sensor RTX microservices will enable developers to easily build large-scale digital twins of factories, cities and even Earth — helping accelerate the next wave of AI.”

Omniverse Cloud Sensor RTX supports simulation at scale

Built on the OpenUSD framework and powered by NVIDIA RTX ray-tracing and neural-rendering technologies, Omniverse Cloud Sensor RTX combines real-world data from videos, cameras, radar, and lidar with synthetic data.

Omniverse Cloud Sensor RTX includes software application programming interfaces (APIs) to accelerate the development of autonomous machines for any industry, NVIDIA said.

Even for scenarios with limited real-world data, the microservices can simulate a broad range of activities, claimed the company. It cited examples such as whether a robotic arm is operating correctly, an airport luggage carousel is functional, a tree branch is blocking a roadway, a factory conveyor belt is in motion, or a robot or person is nearby.

Microservice to be available for AV development 

CARLA, Foretellix, and MathWorks are among the first software developers with access to Omniverse Cloud Sensor RTX for autonomous vehicles (AVs). The microservices will also enable sensor makers to validate and integrate digital twins of their systems in virtual environments, reducing the time needed for physical prototyping, said NVIDIA.

Omniverse Cloud Sensor RTX will be generally available later this year. NVIDIA noted that its announcement coincided with its first-place win at the Autonomous Grand Challenge for End-to-End Driving at Scale at CVPR.

The NVIDIA researchers’ winning workflow can be replicated in high-fidelity simulated environments with Omniverse Cloud Sensor RTX. Developers can use it to test self-driving scenarios in physically accurate environments before deploying AVs in the real world, said the company.

Two of NVIDIA’s papers — one on the training dynamics of diffusion models and another on high-definition maps for autonomous vehicles — are finalists for the Best Paper Awards at CVPR.

The company also said its win for the End-to-End Driving at Scale track demonstrates its use of generative AI for comprehensive self-driving models. The winning submission outperformed more than 450 entries worldwide and received CVPR’s Innovation Award.

Collectively, the work introduces artificial intelligence models that could accelerate the training of robots for manufacturing, enable artists to more quickly realize their visions, and help healthcare workers process radiology reports.

“Artificial intelligence — and generative AI in particular — represents a pivotal technological advancement,” said Jan Kautz, vice president of learning and perception research at NVIDIA. “At CVPR, NVIDIA Research is sharing how we’re pushing the boundaries of what’s possible — from powerful image-generation models that could supercharge professional creators to autonomous driving software that could help enable next-generation self-driving cars.”

Foundation model eases object pose estimation

NVIDIA researchers at CVPR are also presenting FoundationPose, a foundation model for object pose estimation and tracking that can be instantly applied to new objects during inference, without the need for fine tuning. The model uses either a small set of reference images or a 3D representation of an object to understand its shape. It set a new record on a benchmark for object pose estimation.

FoundationPose can then identify and track how that object moves and rotates in 3D across a video, even in poor lighting conditions or complex scenes with visual obstructions, explained NVIDIA.

Industrial robots could use FoundationPose to identify and track the objects they interact with. Augmented reality (AR) applications could also use it with AI to overlay visuals on a live scene.

NeRFDeformer transforms data from a single image

NVIDIA’s research includes a text-to-image model that can be customized to depict a specific object or character, a new model for object-pose estimation, a technique to edit neural radiance fields (NeRFs), and a visual language model that can understand memes. Additional papers introduce domain-specific innovations for industries including automotive, healthcare, and robotics.

A NeRF is an AI model that can render a 3D scene based on a series of 2D images taken from different positions in the environment. In robotics, NeRFs can generate immersive 3D renders of complex real-world scenes, such as a cluttered room or a construction site.

However, to make any changes, developers would need to manually define how the scene has transformed — or remake the NeRF entirely.

Researchers from the University of Illinois Urbana-Champaign and NVIDIA have simplified the process with NeRFDeformer. The method can transform an existing NeRF using a single RGB-D image, which is a combination of a normal photo and a depth map that captures how far each object in a scene is from the camera.
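For readers unfamiliar with RGB-D data, the minimal sketch below shows how a depth map can be unprojected into 3D points using a pinhole camera model. The intrinsics (fx, fy, cx, cy) and image size are illustrative assumptions, not values from the NeRFDeformer paper or any specific sensor.

import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Unproject a depth map (meters) into an Nx3 array of camera-frame 3D points.

    Assumes a simple pinhole camera; fx, fy, cx, cy are example intrinsics.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]  # drop pixels with no depth reading

# Example: a synthetic 480x640 depth map where every pixel is 2 m away.
demo_depth = np.full((480, 640), 2.0)
pts = depth_to_points(demo_depth, fx=525.0, fy=525.0, cx=320.0, cy=240.0)
print(pts.shape)  # (307200, 3)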


Researchers have simplified the process of generating a 3D scene from 2D images using NeRFs. Source: NVIDIA

JeDi model shows how to simplify image creation at CVPR

Creators typically use diffusion models to generate specific images based on text prompts. Prior research focused on the user training a model on a custom dataset, but the fine-tuning process can be time-consuming and inaccessible to general users, said NVIDIA.

JeDi, a paper by researchers from Johns Hopkins University, Toyota Technological Institute at Chicago, and NVIDIA, proposes a new technique that allows users to personalize the output of a diffusion model within a couple of seconds using reference images. The team found that the model outperforms existing methods.

NVIDIA added that JeDi can be combined with retrieval-augmented generation, or RAG, to generate visuals specific to a database, such as a brand’s product catalog.


JeDi is a new technique that allows users to easily personalize the output of a diffusion model within a couple of seconds using reference images, like an astronaut cat that can be placed in different environments. Source: NVIDIA

Visual language model helps AI get the picture

NVIDIA said it has collaborated with the Massachusetts Institute of Technology (MIT) to advance the state of the art for vision language models, which are generative AI models that can process videos, images, and text. The partners developed VILA, a family of open-source visual language models that they said outperforms prior neural networks on benchmarks that test how well AI models answer questions about images.

VILA’s pretraining process provided enhanced world knowledge, stronger in-context learning, and the ability to reason across multiple images, claimed the MIT and NVIDIA team.

The VILA model family can be optimized for inference using the NVIDIA TensorRT-LLM open-source library and can be deployed on NVIDIA GPUs in data centers, workstations, and edge devices.


VILA can understand memes and reason based on multiple images or video frames. Source: NVIDIA

Generative AI drives AV, smart city research at CVPR

NVIDIA Research has hundreds of scientists and engineers worldwide, with teams focused on topics including AI, computer graphics, computer vision, self-driving cars, and robotics. A dozen of the NVIDIA-authored CVPR papers focus on autonomous vehicle research.

“Producing and Leveraging Online Map Uncertainty in Trajectory Prediction,” a paper authored by researchers from the University of Toronto and NVIDIA, has been selected as one of 24 finalists for CVPR’s best paper award.

In addition, Sanja Fidler, vice president of AI research at NVIDIA, will present on vision language models at the Workshop on Autonomous Driving today.

NVIDIA has contributed to the CVPR AI City Challenge for the eighth consecutive year to help advance research and development for smart cities and industrial automation. The challenge’s datasets were generated using NVIDIA Omniverse, a platform of APIs, software development kits (SDKs), and services for building applications and workflows based on Universal Scene Description (OpenUSD).


AI City Challenge synthetic datasets span multiple environments generated by NVIDIA Omniverse, allowing hundreds of teams to test AI models in physical settings such as retail and warehouse environments to enhance operational efficiency. Source: NVIDIA

About the author

Isha Salian writes about deep learning, science and healthcare, among other topics, as part of NVIDIA’s corporate communications team. She first joined the company as an intern in summer 2015. Isha has a journalism M.A., as well as undergraduate degrees in communication and English, from Stanford.

igus acquires Atronia, invests in smart plastics sensors for Industry 4.0
https://www.therobotreport.com/igus-acquires-atronia-invests-in-smart-plastics-sensors-for-industry-4-0/
Thu, 13 Jun 2024 23:17:46 +0000
igus says its acquisition of Atronia will enable cost-effective series production of smart plastics for predictive maintenance.


igus has acquired Atronia, its partner for the iSense EC.W sensor. Source: igus

The combination of sensing with motion plastics, which eliminate the need for lubricants, promises easier-to-maintain machinery for Industry 4.0. igus, a global leader in motion plastics and moving cable management systems, last week acquired the majority stake in Atronia Tailored Sensing.

“The acquisition of Atronia by igus is a promising partnership that will undoubtedly lead to further breakthrough innovations and improved technology integration,” stated Carlos Alexandre Ferreira, manager at Atronia Tailored Systems. 

Gafanha da Nazaré, Portugal-based Atronia develops wireless products for measuring and monitoring applications. The company said it supports product-development strategies including innovation, renewables, and sensing.

Since 2016, Atronia has supported igus with smart plastic sensors. These sensors monitor a product’s condition, indicating whether it needs to be serviced or replaced or whether a problem is occurring. igus said this strategic acquisition is intended to help expand its market offerings for networked plastic components.

Industry 4.0 demands mass production of critical sensors

Industry 4.0 encompasses automation, artificial intelligence, and networking for greater productivity, agility, and safety. igus said its goal is to mass-produce next-generation products and make them accessible to small and midsize businesses (SMBs).

For years, igus has invested in research and development for new types of smart plastics. The Rumford, R.I.-based company‘s lines include plain bearings, energy chains, and cables that are equipped with sensors and integrated into the Internet of Things (IoT).

“Intelligent predictive-maintenance software calculates optimum maintenance times and alerts technicians in good time via e-mail and text message in the event of critical conditions to prevent expensive system failures,” explained igus. The company recently won an RBR50 Robotics Innovation Award for a gripper for its ReBeL collaborative robot.


Michael Blass, CEO of igus e-chain Systems, and Carlos Alexandre Ferreira, manager of Atronia Tailored Systems, celebrate the development of new Industry 4.0 products together. Source: igus GmbH

Atronia acquisition part of igus strategy

“By acquiring Atronia, we can harmonize the processes, systems, and teams of both companies even better, which will lead to synergies and efficiency gains in the long term,” said Michael Blass, CEO of e-chain Systems at igus. “This allows us to start series manufacturing for the Industry 4.0 era and make the products accessible to small and medium-sized companies with limited budgets and little experience.”

The collaboration between igus and Atronia resulted in the iSense EC.W sensor. Mounted on energy chain crossbars, it records the chain’s state and remaining service life.

Customers have given positive feedback about the sensor’s cost-effectiveness and intuitive design, said Atronia and igus. The partnership plans to jointly create more products in the future.

ETRI develops omnidirectional tactile sensors for robot hands
https://www.therobotreport.com/etri-develops-omnidirectional-tactile-sensors-for-robot-hands/
Sat, 01 Jun 2024 14:00:08 +0000
ETRI is introducing a tactile sensor that fits into a robotic finger with stiffness and a shape similar to a human finger.


ETRI’s robotic hand with omnidirectional tactile sensors. | Source: ETRI

Researchers from the Electronics and Telecommunications Research Institute (ETRI) are developing tactile sensors that detect pressure regardless of the direction it’s applied.

ETRI is introducing a tactile sensor that fits into a robotic finger with stiffness and a shape similar to a human finger. The robotic finger can flexibly handle everything from hard objects to deformable soft objects.

“This sensor technology advances the interaction between robots and humans by a significant step and lays the groundwork for robots to be more deeply integrated into our society and industries,” said Kim Hye-jin, a principal researcher at ETRI’s Intelligent Components and Sensors Research Section.

ETRI entered into a mutual cooperation agreement with Wonik Robotics to jointly develop the technology. Wonik will bring its expertise in robotic hands, including its experience developing the “Allegro Hand,” to the project. It previously supplied robotic hands to companies like Meta Platforms, Google, NVIDIA, Microsoft, and Boston Dynamics. The companies jointly exhibited related achievements at the Smart Factory & Automotive Industry Exhibition, held at COEX in Seoul.

Overcoming the technical limitations of pressure sensors


ETRI’s robotic hand with tactile sensors also includes LED lights that change colors according to pressure changes. | Source: ETRI

ETRI’s research team says its technology can overcome the technical limitations of pressure sensors applied to existing robotic fingers. Previously, these sensors could show distorted signals depending on the direction in which an object was grasped. The team said the new sensor is also highly rated for its performance and reliability.

The sensor can detect pressure from various directions, even in a three-dimensional finger form, while also providing the flexibility to handle objects as naturally as a human hand, according to the team. These abilities make up the core of the technology. 

ETRI was able to advance this robotic finger technology by integrating omnidirectional pressure sensing with flexible air chamber tactile sensor technology, high-resolution signal processing circuit technology, and intelligent algorithm technology capable of real-time determination of an object’s stiffness. 

Additionally, the team enhanced the sensor’s precision in pressure detection by including LED lights that change colors according to pressure changes. This provides intuitive feedback to users. The team took this a step further and also integrated vibration detection and wireless communication capabilities to further strengthen communication between humans and robots. 

Unlike traditional sensors, which have sensors directly placed in the area where pressure is applied, these tactile sensors are not directly exposed to the area where pressure is applied. This allows for stable operation over long periods, even with continuous contact. The team says this improves the scalability of applications for robotic hands. 

Looking ahead to the future

ETRI says the development of intelligent robotic hands that can adjust their grip strength according to the stiffness of objects will bring about innovation in ultra-precise object recognition. The team expects commercialization to begin in the latter half of 2024.

Sensitive and robust robotic fingers could help robots perform more complex and delicate tasks in various fields, including in the manufacturing and service sectors. ETRI expects that, through tactile sensor technology, robots will be able to manipulate a wide range of objects more precisely and significantly improve interaction with humans.

In the future, the research team plans to develop an entire robotic hand with these tactile sensors. Additionally, it aims to extend the development to a super-sensory hand that surpasses human sensory capabilities, including pressure, temperature, humidity, light, and ultrasound sensors.

Through the team’s collaboration with Wonik, it has developed a robotic hand capable of recognizing objects through tactile sensors and flexibly controlling force. The research team plans to continue various studies to enable robots to handle objects and perceive the world as human hands do through sensors.

The inside scoop on food manufacturing with Chef Robotics
https://www.therobotreport.com/the-inside-scoop-on-food-manufacturing-with-chef-robotics/
Fri, 31 May 2024 22:01:14 +0000
In this episode, we learn what's new in the world of food manufacturing with Chef Robotics.


In Episode 152 of The Robot Report Podcast, editors Steve Crowe and Mike Oitzman discuss the news of the week. Our featured guest on the show this week is Rajat Bhageria, founder and CEO of Chef Robotics.

Bhageria takes us through the inception of the company and how he has broken down the various workflows in the commercial kitchen. Chef Robotics is focused on food manufacturing, deploying automation to the tasks of filling ready-to-cook, prepared meals with individual food items.

The second feature on the show today is a conversation between Meaghan Ziemba, host of Mavens of Manufacturing, and Joyce Sidopoulos, co-founder and chief of operations at MassRobotics. The interview was recorded live at the 2024 Robotics Summit & Expo in Boston.

Ziemba and Sidopoulos discuss the Women in Robotics Breakfast at the Robotics Summit and the MassRobotics Jumpstart program, which helps local high school girls discover careers in high tech and robotics. They also chat about how MassRobotics is supporting the regional robotics startup ecosystem.

Show timeline




In the news this week

      1. Robotics investments top $466M in April 2024
        • Robotics investments reached at least $466 million in April 2024, the result of 36 funding rounds. The April investments figure lagged recent months and was the smallest amount since November 2023. Last month’s investment total was significantly less than the trailing 12-month average of $1.1 billion.
        • Collaborative Robotics’ $100 million Series B round was April’s largest investment. As its name implies, the California-based firm is developing collaborative robots and enabling software for their use. China’s Rokae, a provider of collaborative and industrial robots, secured $70 million in April.
        • As with previous months, companies located in the U.S. and China received the largest funding amounts, $239 million and $115 million, respectively. Companies based in the U.S. (14) and China (9) also received the majority of the rounds.
      2. Sonair sensor launch
        • The Norway-based sensor company decloaked this week to announce the availability of evaluation kits for its new ultrasonic sensor.
        • Sonair is attempting to disintermediate the safety lidar market with a new obstacle-detecting sensor.
      3. Tangram Vision creates lidar comparison tool
        • If you’re searching for a lidar sensor to add to your robot, Tangram Vision wants to make your evaluation process simpler. The startup, which is building software and hardware for robotic perception, launched an interactive tool called “Spinning LiDAR Visualizer” that lets users compare spinning lidar models.
        • Users can compare 28 sensors from leading manufacturers such as Hesai, Ouster, Quanergy, RoboSense, and Velodyne. The visualizer allows them to select one or two sensors to analyze and compare maximum range, range at 10% reflectivity, angular resolution, and field of view. You can click and drag for different viewpoints, embed the tool somewhere else, or even modify it on GitLab.

Sonair decloaks to launch 3D ultrasonic safety sensor for AMRs
https://www.therobotreport.com/sonair-decloaks-launches-new-3d-ultrasonic-safety-sensor-amrs/
Wed, 29 May 2024 10:00:00 +0000
Sonair has announced the availability of evaluation kits for a new ultrasonic 3D safety sensor to begin shipping later this year.


The new Sonair 3D ultrasonic sensor is designed for obstacle detection for mobile robots. | Credit: Sonair

Sonair AS officially launched today, coming out of stealth and unveiling its new 3D ultrasonic sensor for autonomous mobile robots and automated guided vehicles.

The patented technology behind the device uses ultrasonic sensors rather than light or lasers for obstacle detection. Sonair described the new sensor as a replacement for expensive safety lidar sensors. The Oslo, Norway-based company said it anticipates that the device will sell for 50% to 80% of the cost of a safety lidar unit.

The imaging method is called beamforming. It’s the backbone of processing for sonar and radar, as well as for medical ultrasound imaging. Sonair has combined wavelength-matched transducers with cutting-edge software for beamforming and object-recognition algorithms. This combination makes 3D spatial information available simply by transmitting sound and listening.
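To give a flavor of what beamforming means in practice, here is a minimal delay-and-sum sketch for a linear array. The array geometry, sampling rate, and integer-sample alignment are simplifications for illustration only and are not Sonair's algorithm.

import numpy as np

def delay_and_sum(signals, element_positions, angle_deg, fs, c=343.0):
    """Steer a linear array of time-domain recordings toward one direction.

    signals: (num_channels, num_samples) array of transducer recordings.
    element_positions: (num_channels,) positions along the array, in meters.
    angle_deg: steering angle relative to broadside; fs: sampling rate in Hz.
    c: speed of sound in air (m/s). Each channel is shifted so a wavefront
    from the chosen direction adds in phase; scanning many angles builds an image.
    """
    delays = element_positions * np.sin(np.deg2rad(angle_deg)) / c
    shifts = np.round(delays * fs).astype(int)
    shifts -= shifts.min()  # keep all shifts non-negative
    out = np.zeros(signals.shape[1])
    for channel, shift in zip(signals, shifts):
        out += np.roll(channel, -shift)  # crude integer-sample alignment
    return out / len(signals)

# Example: 8 elements spaced 5 mm apart, 1,000 samples of noise at 192 kHz.
rng = np.random.default_rng(0)
recordings = rng.normal(size=(8, 1000))
positions = np.arange(8) * 0.005
steered = delay_and_sum(recordings, positions, angle_deg=30.0, fs=192_000)
print(steered.shape)  # (1000,)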

By using 3D ultrasonic imaging in robotics applications, Sonair said it delivers safe navigation, miniaturization, cost efficiency, and low power consumption compared with other 3D imaging methods.

“Today, as we step out of stealth mode, we are excited to share our vision and contributions towards a future where humans and machines can coexist safely and productively,” said Knut Sandven, CEO of Sonair. “Our cutting-edge ultrasound technology not only detects obstacles in three dimensions, but does so with unprecedented accuracy and at a fraction of the cost of current sensors.”

Founded in 2022, Sonair specializes in ultrasonic sensors that reduce the financial burden associated with automated guided vehicles (AGVs) and autonomous mobile robots (AMRs). By using patented technology developed at SINTEF’s MiNaLab in Europe, the company claimed that its sensors can enhance a robot’s vision from 2D to 3D, offering a significant improvement over traditional lidar and camera systems.

“Our sensors are designed to end the era of expensive laser-based sensors,” said Sandven. “With our evaluation kit releasing this summer, we encourage innovators and industry leaders to explore the potential of our technology in transforming machine perception.”


The Sonair 3D sensor provides a three-dimensional point cloud of the environment. This image shows the point cloud overlaid on a camera image. | Credit: Sonair

Sonair provides 3D data

The Sonair 3D ultrasonic sensor is designed to enable AMRs to detect the distance and direction of all objects within a 180° x 180° field of view, up to a 5 m (16.5 ft.) range, with a resolution of 1 cm (0.39 in.). Note that these sensors are designed for obstacle detection and avoidance, not for perception and guidance.

The primary use case for acoustic detection and ranging (ADAR) is to inform the robot controller when an object enters any of the safety zones configured around the robot. Today, this function is handled primarily by safety-rated lidars that only offer a two-dimensional and planar view of the world around the robot.

The advantage of the Sonair sensor is both lower cost and the use of 3D+ information, said Sonair. The “plus” comes from the capability of the sensor to track the trajectory of an object entering the safety envelope.
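To make the safety-zone idea concrete, here is a minimal sketch of how a robot controller might flag an intrusion using a 3D point cloud like the one an ADAR sensor reports. The zone dimensions, point format, and function name are illustrative assumptions, not part of Sonair's API.

import numpy as np

def points_in_safety_zone(points, x_range=(0.0, 1.5), y_range=(-0.75, 0.75), z_range=(0.0, 2.0)):
    """Return the subset of 3D points (N x 3, meters, robot frame) inside a box-shaped safety zone.

    The box dimensions are made-up example values; a real deployment would use
    zones sized to the vehicle's speed and stopping distance.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    mask = (
        (x >= x_range[0]) & (x <= x_range[1])
        & (y >= y_range[0]) & (y <= y_range[1])
        & (z >= z_range[0]) & (z <= z_range[1])
    )
    return points[mask]

# Example: a random cloud of 1,000 points spread over a 5 m x 5 m x 2 m volume.
cloud = np.random.uniform([-2.5, -2.5, 0.0], [2.5, 2.5, 2.0], size=(1000, 3))
intruders = points_in_safety_zone(cloud)
if len(intruders) > 0:
    print(f"Obstacle detected: {len(intruders)} points inside the protective zone")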


The evaluation units for the Sonair 3D sensor are available for potential partners to test. | Credit: Sonair

Early customers happy with performance

Several companies are already exploring Sonair’s sensor technology. One is Solwr, a Norwegian company that has developed a combination of robotics and software to automate picking and sorting processes in warehouse and retail environments.

“We are impressed by the technology and the unique opportunity Sonair gives us to offer mobile picking robots with next-generation operational safety solutions,” said Olivier Roulet-Dubonnet, chief technology officer of Solwr. “We are really excited to start testing the Sonair 3D ultrasonic sensor on our robot in warehouses.”

Sonair has opened a waiting list to get evaluation units of its sensors into the hands of AMR and AGV developers. A limited number of pilot units will begin shipping in a few weeks, and the company expects to begin full production later this year.




Tangram Vision creates LiDAR comparison tool
https://www.therobotreport.com/tangram-vision-creates-lidar-comparison-tool/
Mon, 27 May 2024 14:30:43 +0000
Users can compare maximum range, range at 10% reflectivity, angular resolution, and field of view for 28 LiDAR sensors from Hesai, Ouster, Quanergy, RoboSense, and Velodyne.


A screenshot of Tangram Vision’s LiDAR comparison tool

Light Detection and Ranging (LiDAR) is a sensing method that uses light in the form of a pulsed laser to measure distance. LiDAR is useful in autonomy for a number of key functions such as obstacle avoidance, object detection, and object identification. The 3D data from spinning LiDAR is often a key input into navigational systems for autonomous vehicles and robots.
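As a quick worked example of the time-of-flight principle behind LiDAR, the sketch below converts a measured round-trip pulse time into a distance. The 100-nanosecond value is just an illustrative number, not a spec from any sensor in the visualizer.

SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def tof_distance(round_trip_seconds):
    """Distance to a target from the round-trip time of a laser pulse.

    The pulse travels out and back, so the one-way distance is c * t / 2.
    """
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# A pulse that returns after 100 ns corresponds to a target roughly 15 m away.
print(f"{tof_distance(100e-9):.2f} m")  # ~14.99 m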

If you’re searching for a LiDAR sensor to add to your robot, Tangram Vision wants to make your evaluation process simpler. Tangram Vision, a startup building software and hardware for robotic perception, just launched an interactive tool called “Spinning LiDAR Visualizer” that lets you compare spinning LiDAR models.

Users can compare 28 LiDAR models from leading manufacturers such as Hesai, Ouster, Quanergy, RoboSense, and Velodyne. The visualizer allows you to select 1-2 LiDAR sensors to analyze and compare maximum range, range at 10% reflectivity, angular resolution, and field of view. You can click and drag for different viewpoints, embed the tool somewhere else, or even modify it on GitLab.

“From day one, our goal for Tangram Vision has been to build the tools and resources we always wished we’d had as perception engineers,” said Tangram Vision COO & Co-founder Adam Rodnitzky. “Some of that comes in the form of products like our calibration tools, and some in the form of open-source projects, like this new LiDAR visualizer and comparison tool.”

In 2021, Tangram Vision launched an interactive depth sensor visualizer. Rodnitzky said the response from users was positive and that many asked for a similar tool for LiDAR sensors. Rodnitzky said the team has compiled specs for a few dozen solid state directional LiDARs that will either be added to this existing visualizer or to a new one in the future.

“Generally speaking, there is still a dearth of good resources for perception engineers. For many tasks, a lot of time is taken up by research, or trial and error, or simply forging new paths that have yet to be made,” Rodnitzky said. “We’re just trying to do our part to build up a bank of resources that let engineers focus more on productive work, and less on rote tasks that should already be solved.”

Tangram Vision also recently released its 2024 Perception Industry Market Map that offers a look at 100+ companies developing hardware and software for robots, autonomous vehicles and more.

Late in 2023, Tangram Vision announced its first foray into perception hardware with its HiFi 3D depth sensor. HiFi combines a 136° DFOV 2.2MP global shutter IR stereo pair with a 136° DFOV 2MP RGB camera and dual active projectors for high-quality depth maps. The company said the onboard deep-learning matrix multiply accelerator works at 8 TOPS, and is coupled with 16Gb of onboard memory. This allows developers to run machine learning training and processes at the edge, training models with continually calibrated, high-accuracy depth data.

Stanford researcher discusses UMI gripper and diffusion AI models
https://www.therobotreport.com/interview-with-chung-chi-about-the-umi-gripper-and-diffusion-ai-models/
Sat, 25 May 2024 14:30:46 +0000
Stanford Ph.D. researcher Cheng Chi discusses the development of the UMI gripper and the use of diffusion AI models for robotics.


The Robot Report recently spoke with Ph.D. student Cheng Chi about his research at Stanford University and recent publications about using diffusion AI models for robotics applications. He also discussed the recent universal manipulation interface, or UMI gripper, project, which demonstrates the capabilities of diffusion model robotics.

The UMI gripper was part of his Ph.D. thesis work, and he has open-sourced the gripper design and all of the code so that others can continue to help evolve the AI diffusion policy work.

AI innovation accelerates

How did you get your start in robotics?


Stanford researcher Cheng Chi. | Credit: Huy Ha

I worked in the robotics industry for a while, starting at the autonomous vehicle company Nuro, where I was doing localization and mapping.

And then I applied for my Ph.D. program and ended up with my advisor Shuran Song. We were both at Columbia University when I started my Ph.D., and then last year, she moved to Stanford to become full-time faculty, and I moved [to Stanford] with her.

For my Ph.D. research, I started as a classical robotics researcher, and then I started working with machine learning, specifically for perception. Then, in early 2022, diffusion models started to work for image generation; that’s when DALL-E 2 came out, and that’s also when Stable Diffusion came out.

I realized the specific ways in which diffusion models could be formulated to solve a couple of really big problems for robotics, in terms of end-to-end learning and in the actual representation for robotics.

So, I wrote one of the first papers that brought the diffusion model into robotics, which is called diffusion policy. That’s the paper for my previous project, before the UMI project, and I think it’s the foundation of why the UMI gripper works. There’s a paradigm shift happening; my project was one part of it, but there are other robotics research projects that are also starting to work.

A lot has changed in the past few years. Is artificial intelligence innovation accelerating?

Yes, exactly. I experienced it firsthand in academia. Imitation learning was the dumbest possible thing you could do for machine learning with robotics. It’s like, you teleoperate the robot to collect data, and the data is paired with images and the corresponding actions.

In class, we’re taught that people proved that this paradigm of imitation learning, or behavior cloning, doesn’t work. People proved that errors grow exponentially. And that’s why you need reinforcement learning and all the other methods that can address these limitations.

But fortunately, I wasn’t paying too much attention in class. So I just went to the lab and tried it, and it worked surprisingly well. I wrote the code, applied the diffusion model, and for my first task it just worked. I said, “That’s too easy. That’s not worth a paper.”

I kept adding more tasks, like online benchmarks, trying to break the algorithm so that I could find a smart angle to improve on this dumb idea and get a paper out of it. But I just kept adding more and more things, and it refused to break.

There are simulation benchmarks online; I used four different ones and tried to find an angle to break it so that I could write a better paper, but it just didn’t break. Our baseline performance was 50% to 60%, and after applying the diffusion model, it was like 95%. So it was a big jump. That’s the moment I realized maybe there’s something big happening here.
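For readers unfamiliar with the setup being described, here is a minimal behavior-cloning sketch in the spirit of what Chi describes: a network maps camera observations to actions and is trained on teleoperated demonstrations. It is a generic supervised-learning illustration, not the diffusion policy code itself; diffusion policy replaces the direct regression below with a learned denoising process over action sequences.

import torch
import torch.nn as nn

class BCPolicy(nn.Module):
    """Toy behavior-cloning policy: image observation in, robot action out."""
    def __init__(self, action_dim=7):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(32, action_dim)

    def forward(self, images):
        return self.head(self.encoder(images))

# Fake "demonstration" data standing in for teleoperated image/action pairs.
images = torch.randn(64, 3, 96, 96)   # camera frames
actions = torch.randn(64, 7)          # e.g., end-effector pose plus gripper command

policy = BCPolicy()
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-4)

for step in range(100):  # training loop
    pred = policy(images)
    loss = nn.functional.mse_loss(pred, actions)  # imitate the demonstrated actions
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()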


The first diffusion policy research at Columbia was to push a T into position on a table. | Credit: Cheng Chi

How did those findings lead to published research?

That summer, I interned at Toyota Research Institute, and that’s where I started doing real-world experiments using a UR5 [cobot] to push a block into a location. It turned out that this worked really well on the first try.

Normally, you need a lot of tuning to get something to work. But this was different. When I tried to perturb the system, it just kept pushing it back to its original place.

And so that paper got published, and I think that’s my proudest work. I made the paper open-source, and I open-sourced all the code, because the results were so good that I was worried people were not going to believe them. As it turned out, it’s not a coincidence, and other people can reproduce my results and also get very good performance.

I realized that now there’s a paradigm shift. Before [this UMI Gripper research], I needed to engineer a separate perception system, planning system, and then a control system. But now I can combine all of them with a single neural network.

The most important thing is that it’s agnostic to tasks. With the same robot, I can just collect a different data set and train a model with a different data set, and it will just do the different tasks.

Obviously, the data-collection part is painful, as I need to do it 100 to 300 times for one environment to get it to work. But in actuality, it’s maybe one afternoon’s worth of work. Compared with tuning a sim-to-real transfer algorithm, which takes me a few months, this is a big improvement.




UMI Gripper training ‘all about the data’

When you’re training the system for the UMI Gripper, you’re just using the vision feedback and nothing else?

Just the cameras and the end effector pose of the robot — that’s it. We had two cameras: one side camera that was mounted onto the table, and the other one on the wrist.

That was the original algorithm at the time, and I could change to another task and use the same algorithm, and it would just work. This was a big, big difference. Previously, we could only afford one or two tasks per paper because it was so time-consuming to set up a new task.

But with this paradigm, I can pump out a new task in a few days. It’s a really big difference. That’s also the moment I realized that the key trend is that it’s all about data now. I realized, after training more tasks, that my code hadn’t changed in a few months.

The only thing that changed was the data, and whenever the robot doesn’t work, it’s not the code, it’s the data. So when I just add more data, it works better.

And that prompted me to think that we are entering the same paradigm as other AI fields. For example, large language models and vision models started in a small data regime in 2015, but now, with a huge amount of internet data, they work like magic.

The algorithm doesn’t change that much. The only thing that changed is the scale of training, and maybe the size of the models, and that makes me feel like robotics is about to enter that regime soon.


Two UR cobots equipped with UMI grippers demonstrate the folding of a shirt. | Credit: Cheng Chi video

Can these different AI models be stacked like Lego building blocks to build more sophisticated systems?

I believe in big models, but I think they might not be the same thing as you imagine, like Lego blocks. I suspect that the way you build AI for robotics will be that you take whatever tasks you want to do, you collect a whole bunch of data for the task, run that through a model, and then you get something you can use.

If you have a whole bunch of these different types of data sets, you can combine them, to train an even bigger model. You can call that a foundation model, and you can adapt it to whatever use case. You’re using data, not building blocks, and not code. That’s my expectation of how this will evolve.

But simultaneously, there’s a problem here. I think the robotics industry was tailored toward the assumption that robots are precise, repeatable, and predictable. But they’re not adaptable. So the entire robotics industry is geared toward vertical end-use cases optimized for these properties.

Whereas robots powered by AI will have different sets of properties, and they won’t be good at being precise. They won’t be good at being reliable, they won’t be good at being repeatable. But they will be good at generalizing to unseen environments. So you need to find specific use cases where it’s okay if you fail maybe 0.1% of the time.

Safety versus generalization

Robots in industry must be safe 100% of the time. What do you think the solution is to this requirement?

I think if you want to deploy robots in use cases where safety is critical, you either need to have a classical system or a shell that protects the AI system so that it guarantees that when something bad happens, at least there’s a worst-case scenario to make sure that something bad doesn’t actually happen.

Or you design the hardware such that the hardware is [inherently] safe. Hardware is simple. Industrial robots for example don’t rely that much on perception. They have expensive motors, gearboxes, and harmonic drives to make a really precise and very stiff mechanism.

When you have a robot with a camera, it is very easy to implement vision servoing and make adjustments for imprecise robots. So robots don’t have to be precise anymore. Compliance can be built into the robot mechanism itself, and this can make it safer. But all of this depends on finding the verticals and use cases where these properties are acceptable.
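As a rough illustration of the visual-servoing idea Chi mentions, the sketch below applies a proportional correction that drives an observed image feature toward a target pixel position. The gain, feature coordinates, and velocity mapping are simplified assumptions rather than any production controller.

import numpy as np

def image_based_servo_step(feature_px, target_px, gain=0.5):
    """One step of a toy image-based visual servo.

    feature_px: observed (u, v) pixel position of a tracked feature.
    target_px: desired (u, v) pixel position.
    Returns a correction proportional to the pixel error. A real controller
    would map the error through an interaction matrix to 6-DoF camera motion.
    """
    error = np.asarray(target_px, dtype=float) - np.asarray(feature_px, dtype=float)
    return gain * error

# Simulate the feature converging toward the target over repeated corrections.
feature = np.array([400.0, 260.0])
target = np.array([320.0, 240.0])
for _ in range(10):
    correction = image_based_servo_step(feature, target)
    feature += correction  # stand-in for "move the camera and re-observe"
print(np.round(feature, 1))  # close to [320. 240.]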

Lumotive and Hokuyo release 3D lidar sensor with solid-state beam steering
https://www.therobotreport.com/lumotive-and-hokuyo-release-3d-lidar-sensor-with-solid-state-beam-steering/
Wed, 22 May 2024 13:06:38 +0000
The new sensor from Lumotive uses the latest beamforming technology for industrial automation and service robotics.


Hokuyo’s YLM-10LX 3D lidar sensor uses Lumotive’s patented LCM optical beamforming for robotics applications. Source: Lumotive

Perception technology continues to evolve for autonomous systems, becoming more robust and compact. Lumotive and Hokuyo Automatic Co. today announced the commercial release of the YLM-10LX 3D lidar sensor, which they claimed “represents a major leap forward in applying solid-state, programmable optics to transform 3D sensing.”

The product uses Lumotive’s Light Control Metasurface (LCM) optical beamforming technology and is designed for industrial automation and service robotics applications.

“We are thrilled to see our LM10 chip at the heart of Hokuyo’s new YLM-10LX sensor, the first of our customers’ products to begin deploying our revolutionary beam-steering technology into the market,” stated Dr. Axel Fuchs, vice president of business development at Lumotive.

“This product launch highlights the immense potential of our programmable optics in industrial robotics and beyond,” he added. “Together with Hokuyo, we look forward to continuing to redefine what’s possible in 3D sensing.”

Lumotive LCM offers stable lidar perception

Lumotive said its award-winning optical semiconductors enable advanced sensing in next-generation consumer, mobility, and industrial automation products such as mobile devices, autonomous vehicles, and robots. The Redmond, Wash.-based company said its patented LCM chips “deliver an unparalleled combination of high performance, exceptional reliability, and low cost — all in a tiny, easily integrated solution.”

The LCM technology uses dynamic metasurfaces to manipulate and direct light “in previously unachievable ways,” said Lumotive. This eliminates the need for the bulky, expensive, and fragile mechanical moving parts found in traditional lidar systems, it asserted.

“As a true solid-state beam-steering component for lidar, LCM chips enable unparalleled stability and accuracy in 3D object recognition and distance measurement,” said the company. “[The technology] effectively handles multi-path interference, which is crucial for industrial environments where consistent performance and safety are paramount.”

Lumotive said the LM10 LCM allows sensor makers such as Hokuyo to rapidly integrate compact, adaptive programmable optics into their products. It manufactures the LM10 like its other products, following well-established and scalable silicon fabrication techniques. The company said this cuts costs through economies of scale, making solid-state lidar economically feasible for widespread adoption in a broad spectrum of industries.




Software-defined sensing provides flexibility, says Hokuyo

Hokuyo claimed that the new sensor “is the first of its kind in the lidar industry, achieving superior range and field of view (FOV) compared to any other solid-state solution on the market by integrating beam-steering with Lumotive’s LM10 chip.”

In addition, the software-defined scanning capabilities of LCM beam steering allow users to adjust performance parameters such as the sensor’s resolution, detection range, and frame rate, said the Osaka, Japan-based company. They can program and use multiple FOVs simultaneously, adapting to application needs and changing conditions, indoors and outdoors.

Hokuyo said the commercial release of the YLM-10LX sensor marks another milestone in its continued investment in its long-term, strategic collaboration with Lumotive.

“With the industrial sectors increasingly demanding high-performance, reliable lidar systems that also have the flexibility to address multiple applications, our continued partnership with Lumotive allows us to harness the incredible potential of LCM beam steering and to deliver innovative solutions that meet the evolving needs of our customers,” said Chiai Tabata, product and marketing lead at Hokuyo.

Founded in 1946, Hokuyo Automatic offers a range of industrial sensor products for the factory automation, logistics automation, and process automation industries. The company‘s products include collision-avoidance sensors, safety laser scanner and obstacle-detection sensors, optical data transmission devices, laser rangefinders (lidar), and hot-metal detectors. It also provides product distribution and support services.

Forcen closes funding to develop ‘superhuman’ robotic manipulation
https://www.therobotreport.com/forcen-closes-funding-to-develop-superhuman-robotic-manipulation/
Sun, 28 Apr 2024 12:30:18 +0000
Forcen is offering customized and off-the-shelf sensors to aid robotic manipulation in complex environments.


Forcen says its technology will help robotic manipulation advance as vision has. Source: Forcen

Forcen last week said it has closed a funding round of CAD $8.35 million ($6.1 million U.S.). The Toronto-based company plans to use the investment to scale up production to support more customers and to continue developing its force/torque sensing technology and edge intelligence.

“We’ve been focused on delivering custom solutions showcasing our world-first technology with world-class quality … and we’re excited for our customers to announce the robots they’ve been working on with our technology,” stated Robert Brooks, founder and CEO of Forcen. “Providing custom solutions has limited the number of customers we take on, but now we’re working to change that.”

Founded in 2015, Forcen said its goal is to enable businesses to easily deploy “(super)human” robotic manipulation in complex and unstructured applications. The company added that its technology is already moving into production with customers in surgical, logistics, humanoid, and space robotics.

Forcen offers two paths to robot manipulation

Forcen said its new customizable offering and off-the-shelf development kits will accelerate development for current customers and help new ones adopt its technology.

The rapidly customizable offering will use generative design and standard subassemblies, noted the company. This will allow customers to select the size, sensing range/sensitivity, overload protection, mounting bolt pattern, and connector type/location.

By fulfilling orders in as little as four to six weeks, Forcen claimed that it can replace the traditional lengthy catalog of sensors, so customers can get exactly what they need for their unique applications.
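
As a rough illustration of the kind of parameters such a build-to-order flow captures, the sketch below models a sensor request as a simple data structure. The field names, options, and example values are hypothetical and are not Forcen's actual configuration schema.

```python
# Hypothetical sketch of the parameters a build-to-order force/torque
# sensor request might capture. Field names, options, and values are
# illustrative only; they are not Forcen's actual configuration schema.
from dataclasses import dataclass
from enum import Enum


class Connector(Enum):
    USB = "usb"
    CAN = "can"
    ETHERNET = "ethernet"
    ETHERCAT = "ethercat"


@dataclass
class SensorOrder:
    outer_diameter_mm: float       # overall sensor size
    rated_force_n: float           # nominal sensing range, Fz
    rated_torque_nm: float         # nominal sensing range, Tz
    overload_factor: float         # e.g. 5.0 = survives 5x the rated load
    bolt_pattern: str              # mounting interface, e.g. "ISO 9409-1-50-4-M6"
    connector: Connector
    connector_location: str        # e.g. "radial" or "axial"

    def summary(self) -> str:
        return (f"{self.outer_diameter_mm} mm sensor, "
                f"{self.rated_force_n} N / {self.rated_torque_nm} Nm, "
                f"{self.overload_factor}x overload, {self.connector.value}")


order = SensorOrder(60.0, 300.0, 10.0, 5.0, "ISO 9409-1-50-4-M6",
                    Connector.ETHERCAT, "radial")
print(order.summary())
```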

The company will launch its off-the-shelf development kits later this year. They will cover 3 degree-of-freedom (DoF) and 6 DoF force/torque sensors, as well as Forcen's cross-roller, bearing-free 3 DoF joint torque sensor and 3 DoF gripper finger.


Off-the-shelf development kits will support different degrees of freedom. Source: Forcen

Force/torque sensors designed for complex applications

Complex and less-structured robotics applications are challenging for conventional force/torque sensing technologies because of the risk of repeated impact/overload, wide temperature ranges/changes, and extreme constraints on size and weight, explained Forcen. These applications are becoming increasingly common in surgical, logistics, agricultural/food, and underwater robotics.

Forcen added that its “full-stack” sensing systems are designed for such applications using three core proprietary technologies:

  • ForceFilm — A monolithic thin-film transducer that enables sensing systems that are lighter, thinner, and more stable against both drift and temperature, and that scales especially well to multi-dimensional sensing, the company said.
  • Dedicated Overload — A protection structure that acts as a 6 DoF hard stop. The company said it lets sensitivity and overload protection be designed separately, so the structure can endure thousands of overload events while the sensor still achieves millions of sensing cycles.
  • Synap — Forcen's onboard edge intelligence, which comes factory compensated and calibrated and can connect to any standard digital bus (USB, CAN, Ethernet, EtherCAT). Forcen said this creates “a full-stack force/torque sensing solution that is truly plug-and-play” with maintenance- and calibration-free operation; a generic host-side sketch follows the image below.

New offerings include features to support demanding robotics applications. Source: Forcen
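
Forcen has not published a wire format for Synap, so the following is only a generic sketch of how a host application might unpack a six-axis wrench from a fixed-length frame arriving over a digital bus. The frame layout is invented for illustration and is not Forcen's actual protocol.

```python
# Generic sketch of unpacking a six-axis force/torque sample from a
# fixed-length binary frame received over a digital bus. The layout
# (little-endian, six float32 values plus a uint32 timestamp) is a
# made-up example, not Forcen's actual Synap protocol.
import struct
from typing import NamedTuple


class Wrench(NamedTuple):
    fx: float   # force in newtons
    fy: float
    fz: float
    tx: float   # torque in newton-meters
    ty: float
    tz: float
    t_us: int   # sample timestamp, microseconds


FRAME = struct.Struct("<6fI")   # 6 x float32 + 1 x uint32 = 28 bytes


def parse_frame(payload: bytes) -> Wrench:
    if len(payload) != FRAME.size:
        raise ValueError(f"expected {FRAME.size} bytes, got {len(payload)}")
    return Wrench(*FRAME.unpack(payload))


# Example with a synthetic frame as it might arrive from a bus driver.
raw = FRAME.pack(1.2, -0.4, 9.8, 0.01, 0.02, -0.03, 123456)
print(parse_frame(raw))
```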

Learn about Forcen at the Robotics Summit

Brightspark Ventures and BDC Capital’s Deep Tech Venture Fund co-led Forcen’s funding round, with participation from Garage Capital and MaRS IAF, as well as returning investors including EmergingVC.

“Robotic vision has undergone a revolution over the past decade and is continuing to accelerate with new AI approaches,” said Mark Skapinker, co-founder and partner at Brightspark Ventures. “We expect robotic manipulation to quickly follow in the footsteps of robotic vision and Forcen’s technology to be a key enabler of ubiquitous human-level robotic manipulation.”

Forcen is returning to the Robotics Summit & Expo this week. It will have live demonstrations of its latest technology in Booth 113 at the Boston Convention and Exhibition Center. 

CEO Brooks will be talking on May 1 at 4:15 p.m. EDT about “Designing (Super)Human-Level Haptic Sensing for Surgical Robotics.” Registration is now open for the event, which is co-located with DeviceTalks Boston and the Digital Transformation Forum.


Bota Systems launches PixONE force-torque sensors for robotics https://www.therobotreport.com/bota-systems-launches-pixone-force-torque-sensors-for-robotics/ https://www.therobotreport.com/bota-systems-launches-pixone-force-torque-sensors-for-robotics/#respond Tue, 23 Apr 2024 17:09:34 +0000 https://www.therobotreport.com/?p=578822 Bota Systems says it designed its PixONE force-torque sensors to keep cabling inside robotic arms.


The new PixONE force-torque sensor on an industrial robot. | Source: Bota Systems

Bota Systems AG today released PixONE, a sensor that brings together high-performance electronics with a compact, lightweight design. Founded in 2020 as an ETH Zurich spin-off, the company specializes in multi-axis force-torque sensors.

Zurich-based Bota said it designed its latest sensors for “seamless integration into robotic systems.” PixONE features a through-hole architecture facilitating internal cable routing to enhance robot agility and safety, claimed the company.

The sensor’s hollow shaft design makes it easier for users to connect a robot’s arm and end-of-arm tooling (EOT or EOAT) while maintaining the integrity of internal cable routing, said Bota Systems. It added that this design can be particularly helpful because many robot arm manufacturers are moving toward internal routing to eliminate cable tangles and motion restrictions. 

“Our objective is to equip robots with the sense of touch, making them not only safer and more user-friendly, but also more collaborative,” stated Klajd Lika, co-founder and CEO of Bota Systems. “PixONE is an advanced, OEM-ready sensing solution that enables robot developers and integrators to effortlessly enhance any robot in development with minimal integration effort.”

PixONE’s minimalist design is lightweight

PixONE has a minimalist two-piece design. Bota Systems said this simplifies assembly and significantly reduces the sensor’s weight, making it 30% lighter than comparable sensors on the market. This is critical for dynamic systems such as fast-moving robots, where excess weight can impede performance and operational efficiency, it said.

Bota Systems offers PixONE in various models with an external diameter starting at 2.36 in. (60 mm) and a through-hole diameter of 0.59 in. (15 mm). The sensors include an inertial measurement unit (IMU) and have an IP67 waterproof rating. The company said these features make the sensor suitable for a wide range of operational environments.

“The PixONE offers a higher torque-to-force ratio than comparative sensors with integrated electronics, which gives integrators more freedom in EOT design, especially with larger tools,” said Ilias Patsiaouras, co-founder and chief technology officer of Bota Systems. “PixONE elevates the sensor integration by offering internal connection and cable passthrough, making it ideal for a wide spectrum of robotic applications, ranging from industrial to medical.”

The PixONE configurations can support payloads up to 551 lb. (250 kg). Bota said it maintained a uniform interface across all models to facilitate rapid integration.

The PixONE’s design also minimizes external connections and component count, enhancing system reliability, according to the company. PixONE uses EtherCAT technology for high-speed data communication and supports Power over Ethernet (PoE).
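
One reason an onboard IMU is useful on a force-torque sensor is gravity compensation: subtracting the static weight of the attached tool from the measured wrench as the robot changes orientation. The sketch below shows the standard calculation under assumed values; the tool mass, center of mass, and frame conventions are illustrative and not specific to Bota's firmware.

```python
# Generic gravity compensation for a force-torque sensor with an IMU.
# Given the gravity direction in the sensor frame (from the IMU) and a
# known tool mass and center of mass, subtract the tool's static load
# from the measured wrench. Values are illustrative, not Bota-specific.
import numpy as np

TOOL_MASS_KG = 1.8                          # end-of-arm tool mass
TOOL_COM_M = np.array([0.0, 0.0, 0.05])     # tool CoM in the sensor frame
G = 9.81


def compensate(wrench: np.ndarray, g_dir_sensor: np.ndarray) -> np.ndarray:
    """wrench = [Fx, Fy, Fz, Tx, Ty, Tz] measured in the sensor frame;
    g_dir_sensor = unit vector of gravity expressed in the sensor frame."""
    f_gravity = TOOL_MASS_KG * G * g_dir_sensor      # force from tool weight
    t_gravity = np.cross(TOOL_COM_M, f_gravity)      # torque about the sensor origin
    compensated = wrench.copy()
    compensated[:3] -= f_gravity
    compensated[3:] -= t_gravity
    return compensated


# Example: tool hanging straight down, gravity along -Z of the sensor frame.
measured = np.array([0.3, -0.1, -17.8, 0.02, -0.01, 0.0])
print(compensate(measured, np.array([0.0, 0.0, -1.0])))
```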

See Bota Systems at the Robotics Summit & Expo

Bota Systems is an official distribution and integration partner of Universal Robots and Mecademic. In October 2023, it added NEXT Robotics to its distributor network.

That same month, Bota Systems raised $2.5 million in seed funding. The company said it plans to use the funding to grow its team to address increasing demand by leading research labs and manufacturing companies. It also plans to accelerate its product roadmap.

To learn more about Bota Systems, visit the company at Booth 315 at the Robotics Summit & Expo, which will be on May 1 and 2 in Boston.

“Our vision is to equip robots with the sense of touch, making them not only safer and more user-friendly, but also more collaborative,” Klajd Lika, co-founder and CEO of Bota Systems, told The Robot Report. “We look forward to the Robotics Summit & Expo because it brings together the visionaries and brightest minds of the industry — this interaction is valuable for us to shape the development of our next generation of innovative sensors.” 

This will be the largest Robotics Summit ever. It will include more than 200 exhibitors, various networking opportunities, a women in robotics breakfast, a career fair, an engineering theater, a startup showcase, and more. Registration is now open for the event.


Advanced Navigation’s Hydrus explores shipwrecks in the Indian Ocean https://www.therobotreport.com/advanced-navigations-hydrus-explores-shipwrecks-indian-ocean/ https://www.therobotreport.com/advanced-navigations-hydrus-explores-shipwrecks-indian-ocean/#respond Sun, 21 Apr 2024 12:30:31 +0000 https://www.therobotreport.com/?p=578771 Advanced Navigation recently sent Hydrus to the depths of the Rottnest ship graveyard, located off the coast of Western Australia. 


Advanced Navigation’s Hydrus micro autonomous underwater vehicle (AUV) deployed. | Source: Advanced Navigation

Advanced Navigation is bringing humans closer to the ocean with Hydrus, a relatively small underwater drone. The company recently sent Hydrus to the depths of the Rottnest ship graveyard, located in the Indian Ocean and just off the coast of Western Australia. 

The Sydney, Australia-based developer of AI robotics and navigation technology said that after reviewing the gathered data, the team discovered a 210-ft. (64-m) shipwreck scattered across the sea floor, making the wreck more than twice the length of a blue whale.

“We’ve found through all of our testing that Hydrus is very reliable, and it will complete its mission and come to the surface or come to its designated return point,” Alec McGregor, Advanced Navigation’s photogrammetry specialist, told The Robot Report. “And then you can just scoop it up with a net from the side of the boat.”

Robot can brave the ocean’s unexplored depths

Humans have only explored and charted 24% of the ocean, according to Advanced Navigation. The unexplored parts are home to more than 3 million undiscovered shipwrecks, and 1,819 recorded wrecks are lying off Western Australia’s shore alone.

These shipwrecks can hold keys to our understanding of past culture, history, and science, said the company.

The Rottnest graveyard is a particularly dense area for these abandoned ships. Beginning in the 1900s, the area became a burial ground for ships, naval vessels, aircraft, and secretive submarines. A majority of these wrecks haven’t been discovered because the depth ranges from 164 to 656 ft. (50 to 200 m). 

Traditionally, there are two ways of gathering information from the deep sea, explained McGregor. The first is divers, who have to be specially trained to reach the depths Advanced Navigation is interested in studying. 

“Some of the wrecks that we’ve been looking at are in very deep water, so 60 m [196.8 ft.] for this particular wreck, which is outside of the recreational diving limit,” McGregor said. “So, you actually have to go into tech diving.”

“And when you go deeper with all of this extra equipment, it tends to just increase the risks associated with going to depth,” he said. “So, you need to have special training, you need to have support vessels, and you also have to be down in the water for a long period of time.”

The second option is to use remotely operated vehicles (ROVs) or autonomous underwater vehicles (AUVs). While this method doesn’t involve putting people at risk, it can still be expensive. 

“Some of the drawbacks with using traditional methods include having to have big support vessels,” McGregor said. “And getting the actual ROVs in and out of the water sometimes requires a crane, whereas with the Hydrus, you can just chuck it off the side of the boat.”

“So, with Hydrus, you’re able to reduce the costs of operation,” he added. “You’re also able to get underwater data super easily and super quickly by just chucking a Hydrus off the boat. It can be operated with one person.”

Advanced Navigation uses ‘wet electronics’

One of the biggest challenges with underwater robotics, McGregor said, is keeping important electronics dry. Conventional ROVs do this with pressure chambers. 

“Traditional ROVs have big chambers which basically keep all the electronics dry,” he noted. “But from a mechanical point of view, if you want to go deeper, you need to have thicker walls so that they can resist the pressure at depth.”

“If you need thicker walls, that increases the weight of the robot,” said McGregor. “And if you increase the weight, but you still want the robot to be buoyant, you have to increase the size. It’s just this kind of spiral of increasing the size to increase the buoyancy.”

“What we’ve managed to do with Hydrus is we have designed pressure-tolerant electronics, and we use a method of actually having what we call ‘wet electronics,'” McGregor said. “This involves basically potting the electronics in a plastic material. And we don’t use it to keep the structural integrity of the robot. So we don’t need a pressure vessel because we’ve managed to protect our electronics that way.” 

Once it’s underwater, Hydrus operates fully autonomously. Unlike traditional ROVs, the system doesn’t require a tether to navigate underwater, and the Advanced Navigation team has limited real-time communication capabilities. 

“We do have very limited communication with Hydrus through acoustic communications,” McGregor said. “The issue with acoustic communications is that there’s not a lot of data that can be transferred. We can get data such as the position of Hydrus, and we can also send simple commands such as ‘abort mission’ or ‘hold position’ or ‘pause mission,’ but we can’t physically control it.”


Hydrus provides high-resolution data

While Hydrus has impressive autonomous capabilities, it doesn’t find wrecks all on its own. In this case, McGregor said, Advanced Navigation worked closely with the Western Australian (WA) Museum to find the wreck.

The museum gave the company a rough idea of where a shipwreck could be. Then the team sent Hydrus on a reconnaissance mission to determine the wreck’s exact location. 

“When we got Hydrus back on board, we were able to offload all the data and reconstruct the mission based on the images and from that, we were then able to see where the shipwreck was,” McGregor said. “One of the good things about Hydrus is that we can actually get geo-referenced data onto the water with auxiliary systems that we have on the boat.”

Hydrus gathered 4K geo-referenced imagery and video footage. Curtin University HIVE, which specializes in shipwreck photogrammetry, used this data to rebuild a high-resolution 3D digital twin of the wreck. Ross Anderson, a curator at the WA Museum, closely examined the digital twin. 

Anderson found that the wreck was an over 100-year-old coal hulk from Fremantle Port’s bygone days. Historically, these old iron ships were used to service steamships in Western Australia. 

In the future, the team is interested in exploring other shipwrecks, like the SS Koombana, an ultra-luxury passenger ship. The ship ferried more than 150 passengers before it vanished into a cyclone in 1912.

However, Advanced Navigation isn’t just interested in gaining information from shipwrecks. 

“Another thing we’re doing with a lot of this data is actually coral reef monitoring. So we’re making 3D reconstructions of coral reefs, and we’re working with quite a few customers to do this,” McGregor said.  

Hydrus reduced the surveying costs for this particular mission by up to 75%, according to the company. This enabled the team to conduct more frequent and extensive surveying of the wreck in a shorter period of time. 

Bota Systems to showcase its latest sensors at Robotics Summit https://www.therobotreport.com/bota-systems-to-showcase-its-latest-sensors-at-robotics-summit/ https://www.therobotreport.com/bota-systems-to-showcase-its-latest-sensors-at-robotics-summit/#respond Thu, 18 Apr 2024 18:48:10 +0000 https://www.therobotreport.com/?p=578759 Bota Systems will be at Booth 315 on the show floor at the Robotics Summit & Expo, which takes place on May 1 and 2, 2024. 


Bota offers sensor solutions intended to allow robots to work and move safely. | Source: Bota Systems

Bota Systems will exhibit its recently unveiled sensors featuring a through-hole flange design and enhanced cable management at the Robotics Summit & Expo. The company can be found in Booth 315 on the event’s show floor.

“During the Robotics Summit, we will showcase our complete range of sensors at our booth, and we invite you to experience these sensors in action,” Marco Martinaglia, vice president of marketing at Bota Systems, told The Robot Report. “You’ll see a live demonstration of inertia compensation with a handheld device, and a Mecademic Robot equipped with our cutting-edge MiniONE Pro six-axis sensor will perform automated assembly and deburring tasks.”

The company said it designed its latest sensors for humanoid, industrial, and medical robots. It claimed that they can improve functions in fields such as welding and minimally invasive surgeries.

Bota Systems added that its force-torque sensors can give robots a sense of touch, enabling them to accurately and reliably perform tasks that were previously only possible with manual operators.

Bota Systems designs for ease of integration

“We are particularly excited to have just announced the release of our latest sensor, the PixONE,” said Ilias Patsiaouras, co-founder and chief technology officer of Bota Systems.

“The PixONE sensor’s innovative hollow shaft design allows it to be seamlessly integrated between the robot’s arm and the end-of-arm tooling [EOAT], maintaining the integrity of internal cable routing,” he added. “This design is particularly advantageous as many robotic arm manufacturers and OEMs are moving towards internal routing to eliminate cable tangles and motion restrictions.”

Bota Systems is an official distribution and integration partner of Universal Robots and Mecademic.

In October 2023, the company added NEXT Robotics to its distributor network. NEXT is now its official distributor for the German-speaking countries of Germany, Austria, and Switzerland. That same month, Bota Systems raised $2.5 million in seed funding.

See sensors at the Robotics Summit & Expo

“Our vision is to equip robots with the sense of touch, making them not only safer and more user-friendly, but also more collaborative,” Klajd Lika, co-founder and CEO of Bota Systems, told The Robot Report. “We look forward to the Robotics Summit & Expo because it brings together the visionaries and brightest minds of the industry — this interaction is valuable for us to shape the development of our next generation of innovative sensors.”

This will be the largest Robotics Summit & Expo ever. It will include more than 200 exhibitors, various networking opportunities, a Women in Robotics breakfast, a career fair, an engineering theater, a startup showcase, and more. Registration is now open for the event.

March 2024 robotics investments total $642M https://www.therobotreport.com/march-2024-robotics-investments-total-642m/ https://www.therobotreport.com/march-2024-robotics-investments-total-642m/#respond Thu, 18 Apr 2024 14:14:18 +0000 https://www.therobotreport.com/?p=578749 March 2024 robotics funding was buoyed by significant investment into software and drone suppliers.


Chinese and U.S. companies led March 2024 robotics investments. Credit: Eacon Mining, Dan Kara

Thirty-seven robotics firms received funding in March 2024, pulling in a total monthly investment of $642 million. March’s investment figure was significantly less than February’s mark of approximately $2 billion, but it was in keeping with other monthly investments in 2023 and early 2024 (see Figure 1, below).

Figure 1: March 2024 investments dropped from the previous month.

California companies secure investment

As described in Table 1 below, the two largest robotics investments in March were secured by software suppliers. Applied Intuition, a provider of software infrastructure to deploy autonomous vehicles at scale, received a $250 million Series E round, while Physical Intelligence, a developer of foundation models and other software for robots and actuated devices, attracted $70 million in a seed round. Both firms are located in California.

Other California firms receiving substantial rounds included Bear Robotics, a manufacturer of self-driving indoor robots that raised a $60 million Series C round, and unmanned aerial system (UAS) developer Firestorm, whose seed funding was $20 million.

Table 1: March 2024 robotics investments

Company | Amount ($) | Round | Country | Technology
Agilis Robotics | 10,000,000 | Series A | China | Surgical/interventional systems
Aloft | Estimate | Other | U.S. | Drones, data acquisition / processing / management
Applied Intuition | 250,000,000 | Series E | U.S. | Software
Automated Architecture | 3,280,000 | Estimate | U.K. | Micro-factories
Bear Robotics | 60,000,000 | Series C | U.S. | Indoor mobile platforms
BIOBOT Surgical | 18,000,000 | Series B | Singapore | Surgical systems
Buzz Solutions | 5,000,000 | Other | U.S. | Drone inspection
Cambrian Robotics | 3,500,000 | Seed | U.K. | Machine vision
Coctrl | 13,891,783 | Series B | China | Software
DRONAMICS | 10,861,702 | Grant | U.K. | Drones
Eacon Mining | 41,804,272 | Series C | China | Autonomous transportation, sensors
ECEON Robotics | Estimate | Pre-seed | Germany | Autonomous forklifts
ESTAT Automation | Estimate | Grant | U.S. | Actuators / motors / servos
Fieldwork Robotics | 758,181 | Grant | U.K. | Outdoor mobile manipulation platforms, sensors
Firestorm Labs | 20,519,500 | Seed | U.S. | Drones
Freespace Robotics | Estimate | Other | U.S. | Automated storage and retrieval systems
Gather AI | 17,000,000 | Series A | U.S. | Drones, software
Glacier | 7,700,000 | Other | U.S. | Articulated robots, sensors
IVY TECH Ltd. | 421,435 | Grant | U.K. | Outdoor mobile platforms
KAIKAKU | Estimate | Pre-seed | U.K. | Collaborative robots
KEF Robotics | Estimate | Grant | U.S. | Drone software
Langyu Robot | Estimate | Other | China | Automated guided vehicles, software
Linkwiz | 2,679,725 | Other | Japan | Software
Motional | Estimate | Seed | U.S. | Autonomous transportation systems
Orchard Robotics | 3,800,000 | Pre-seed | U.S. | Crop management
Pattern Labs | 8,499,994 | Other | U.S. | Indoor and outdoor mobile platforms
Physical Intelligence | 70,000,000 | Seed | U.S. | Software
Piximo | Estimate | Grant | U.S. | Indoor mobile platforms
Preneu | 11,314,492 | Series B | Korea | Drones
QibiTech | 5,333,884 | Other | Japan | Software, operator services, uncrewed ground vehicles
Rapyuta Robotics | Estimate | Other | Japan | Indoor mobile platforms, autonomous forklifts
RIOS Intelligent Machines | 13,000,000 | Series B | U.S. | Machine vision
RITS | 13,901,825 | Series A | China | Sensors, software
Robovision | 42,000,000 | Other | Belgium | Computer vision, AI
Ruoyu Technology | 6,945,312 | Seed | China | Software
Sanctuary Cognitive Systems | Estimate | Other | Canada | Humanoids / bipeds, software
SeaTrac Systems | 899,955 | Other | U.S. | Uncrewed surface vessels
TechMagic | 16,726,008 | Series C | Japan | Articulated robots, sensors
Thor Power | Estimate | Seed | China | Articulated robots
Viam | 45,000,000 | Series B | Germany | Smart machines
WIRobotics | 9,659,374 | Series A | S. Korea | Exoskeletons, consumer, home healthcare
X Square | Estimate | Seed | U.S. | Software
Yindatong | Estimate | Seed | China | Surgical / interventional systems
Zhicheng Power | Estimate | Series A | China | Consumer / household
Zhongke Huiling | Estimate | Seed | China | Humanoids / bipeds, microcontrollers / microprocessors / SoC

Drones get fuel for takeoff in March 2024

Providers of drones, drone technologies, and drone services also attracted substantial individual investments in March 2024. Examples included Firestorm and Gather AI, a developer of inventory monitoring drones whose Series A was $17 million.

In addition, drone services provider Preneu obtained $11 million in Series B funding, and DRONAMICS, a developer of drone technology for cargo transportation and logistics operations, got a grant worth $10.8 million.

Companies in the U.S. and China received the majority of the March 2024 funding, at $451 million and $100 million, respectively (see Figure 2, below).

Companies based in Japan and the U.K. were also well represented among the March 2024 investment totals. Four companies in Japan secured a total of $34.7 million, while an equal number of firms in the U.K. attracted $13.5 million in funding.
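
For readers who want to reproduce the country and round breakdowns from Table 1, the sketch below shows one way to aggregate the data with pandas. Only a handful of rows are included, and rows with undisclosed (estimated) amounts are omitted, so the sums only approximate the reported totals.

```python
# Sketch of aggregating Table 1 by country and by round, as in Figures
# 2 and 3. Only a few rows are shown; rows with estimated (undisclosed)
# amounts are skipped, so sums will only approximate the article's totals.
import pandas as pd

rows = [
    ("Applied Intuition", 250_000_000, "Series E", "U.S."),
    ("Physical Intelligence", 70_000_000, "Seed", "U.S."),
    ("Bear Robotics", 60_000_000, "Series C", "U.S."),
    ("Robovision", 42_000_000, "Other", "Belgium"),
    ("Eacon Mining", 41_804_272, "Series C", "China"),
    ("Gather AI", 17_000_000, "Series A", "U.S."),
]
df = pd.DataFrame(rows, columns=["company", "amount_usd", "round", "country"])

by_country = df.groupby("country")["amount_usd"].sum().sort_values(ascending=False)
by_round = df.groupby("round")["amount_usd"].sum().sort_values(ascending=False)
print(by_country)
print(by_round)
```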

 

Figure 2: March 2024 robotics investment by country.

Nearly 40% of March’s robotics investments came from a single Series E round — that of Applied Intuition. The remaining funding classes were all represented in March 2024 (Figure 3, below).

Figure 3: March 2024 robotics funding by type and amounts.

Editor’s notes

What defines robotics investments? The answer to this simple question is central in any attempt to quantify them with some degree of rigor. To make investment analyses consistent, repeatable, and valuable, it is critical to wring out as much subjectivity as possible during the evaluation process. This begins with a definition of terms and a description of assumptions.

Investors and investing

Investment should come from venture capital firms, corporate investment groups, angel investors, and other sources. Friends-and-family investments, government/non-governmental agency grants, and crowd-sourced funding are excluded.

Robotics and intelligent systems companies

Robotics companies must generate or expect to generate revenue from the production of robotics products (that sense, analyze, and act in the physical world), hardware or software subsystems and enabling technologies for robots, or services supporting robotics devices. For this analysis, autonomous vehicles (including technologies that support autonomous driving) and drones are considered robots, while 3D printers, CNC systems, and various types of “hard” automation are not.

Companies that are “robotic” in name only, or use the term “robot” to describe products and services that do not enable or support devices acting in the physical world, are excluded. For example, this includes “software robots” and robotic process automation. Many firms have multiple locations in different countries. Company locations given in the analysis are based on the publicly listed headquarters in legal documents, press releases, etc.

Verification

Funding information is collected from several public and private sources. These include press releases from corporations and investment groups, corporate briefings, market research firms, and association and industry publications. In addition, information comes from sessions at conferences and seminars, as well as during private interviews with industry representatives, investors, and others. Unverifiable investments are excluded and estimates are made where investment amounts are not provided or are unclear.


SITE AD for the 2024 RoboBusiness registration now open.Register now.


Project CETI develops robotics to make sperm whale tagging more humane https://www.therobotreport.com/project-ceti-robotics-make-sperm-whale-tagging-more-humane/ https://www.therobotreport.com/project-ceti-robotics-make-sperm-whale-tagging-more-humane/#respond Sun, 14 Apr 2024 12:00:50 +0000 https://www.therobotreport.com/?p=578695 Project CETI is using robotics, machine learning, biology, linguistics, natural language processing, and more to decode whale communications. 


Project CETI is a nonprofit scientific and conservation initiative that aims to decode whale communications. | Source: Project CETI

Off the idyllic shores of Dominica, a country in the Caribbean, hundreds of sperm whales gather deep in the sea. While their communication sounds like a series of clicks and creaks to the human ear, these whales have unique, regional dialects and even accents. A multidisciplinary group of scientists, led by Project CETI, is using soft robotics, machine learning, biology, linguistics, natural language processing, and more to decode their communications. 

Founded in 2020, Project CETI, or the Cetacean Translation Initiative, is a nonprofit organization dedicated to listening to and translating the communication systems of sperm whales. The team is using specially created tags that latch onto whales and gather information for the team to decode. Getting these tags to stay on the whales, however, is no easy task. 

“One of our core philosophies is we could never break the skin. We can never draw blood. These are just our own, personal guidelines,” David Gruber, the founder and president of Project CETI, told The Robot Report

“[The tags] have four suction cups on them,” he said. “On one of the suction cups is a heart sensor, so you can get the heart rate of the whale. There’s also three microphones on the front of it, so you hear the whale that it’s on, and you can know the whales that’s around it and in front of it.

“So you’ll be able to know from three different microphones the location of the whales that are speaking around it,” explained Gruber. “There’s a depth sensor in there, so you can actually see when the whale was diving and so you can see the profiles of it going up and down. There’s a temperature sensor. There’s an IMU, and it’s like a gyroscope, so you can know the position of the whale.”
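
As a rough illustration of the sensor streams Gruber describes, the sketch below models one offloaded tag sample as a simple record. The field names, units, and values are invented for illustration and are not Project CETI's actual data format.

```python
# Hypothetical record for one sample offloaded from a whale tag that
# carries hydrophones, a heart-rate sensor, depth and temperature
# sensors, and an IMU. Field names, units, and values are invented for
# illustration; this is not Project CETI's actual data format.
from dataclasses import dataclass


@dataclass
class TagSample:
    t_s: float                              # seconds since tag attachment
    audio_ch: tuple[float, float, float]    # instantaneous level per hydrophone
    heart_rate_bpm: float                   # from the suction-cup heart sensor
    depth_m: float                          # from the depth sensor
    water_temp_c: float                     # from the temperature sensor
    orientation_rpy_deg: tuple[float, float, float]  # roll, pitch, yaw from the IMU


# Illustrative sample during a dive.
sample = TagSample(
    t_s=42.0,
    audio_ch=(0.12, 0.08, 0.10),
    heart_rate_bpm=18.0,
    depth_m=312.5,
    water_temp_c=9.4,
    orientation_rpy_deg=(3.0, -75.0, 180.0),
)
print(sample)
```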


SITE AD for the 2024 RoboBusiness registration now open.Register now.


Finding a humane way to tag whales

One of the core principles of Project CETI, according to Gruber, is to use technology to bring people closer to animals. 

“There was a quote by Stephen Hawking in a BBC article, in which he posited that the full development of AI and robotics would lead to the extinction of the human race,” Gruber said. “And we thought, ‘This is ridiculous, why would scientists develop something that would lead to our own extinction?’ And it really inspired us to counter this narrative and be like, ‘How can we make robots that are actually very gentle and increase empathy?’”

“In order to deploy those tags onto whales, what we needed was a form of gentle, stable, reversible adhesion,” Alyssa Hernandez, a functional morphologist, entomologist, and biomechanist on the CETI team, told The Robot Report. “So something that can be attached to the whale, where it would go on and remain on the whale for a long amount of time to collect the data, but still be able to release itself eventually, whether naturally by the movements of the whale, or by our own mechanism of sort of releasing the tag itself.”

This is what led the team to explore bio-inspired techniques of adhesion. In particular, the team settled on studying suction cups that are common in marine creatures. 

“Suction discs are pretty common in aquatic systems,” said Hernandez. “They show up in multiple groups of organisms, fish, cephalopods, and even aquatic insects. And there are variations often on each of these discs in terms of the morphology of these discs, and what elements these discs have.”

Hernandez drew on her biology background to design suction-cup grippers suited to sperm whales, which are constantly moving through the water. This means the suction cups must withstand changing pressures and forces and stay sealed against a whale’s uneven skin even as it moves.

“In the early days, when we first started this project, the question was, ‘Would the soft robots even survive in the deep sea?’” said Gruber. 


An overview of Project CETI’s mission. | Source: Project CETI

How suction cup shape changes performance

“We often think of suction cups as round, singular material elements, and in biology, that’s not usually the case,” noted Hernandez. “Sometimes these suction disks are sort of elongated or slightly different shaped, and oftentimes they have this sealing rim that helps them keep the suction engaged on rough surfaces.”

Hernandez said the CETI team started off with a standard, circular suction cup. Initially, the researchers tried out multiple materials and combinations of stiff backings and soft rims. Drawing on her biology experience, Hernandez began to experiment with more elongated, ellipse shapes. 

“I often saw [elongated grippers] when I was in museums looking at biological specimens or in the literature, so I wanted to look at an ellipse-shaped cup,” Hernandez said. “So I ended up designing one that was a medium-sized ellipse, and then a thinner ellipse as well. Another general design that I saw was more of this teardrop shape, so smaller at one end and wider at the base.” 

Hernandez said the team also looked at peanut-shaped grippers. In trying these different shapes, she looked for one that would provide increased resistance over the more traditional circular suction cups.

“We tested [the grippers] on different surfaces of different roughness and different compliance,” recalled Hernandez. “We ended up finding that compared to the standard circle, and variations of ellipses, this medium-sized ellipse performed better under shear conditions.” 

She said the teardrop-shaped gripper also performed well in lab testing. These shapes performed better because, unlike a circle, they don’t have a uniform stiffness throughout the cup, allowing them to bend with the whale as it moves. 
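
A back-of-the-envelope calculation helps show why the shear results cannot be explained by sealed area alone: the idealized normal pull-off force of a suction cup is just the pressure differential times the sealed area, which is identical for a circle and an ellipse of equal area. The sketch below runs that comparison with illustrative dimensions; the measured shear advantage comes from rim compliance and stiffness distribution, which this simple model ignores.

```python
# Idealized pull-off force of a suction cup: F = delta_P * A. A circle
# and an ellipse of equal area give the same normal holding force, so
# the measured shear advantage of elongated cups must come from rim
# compliance and stiffness distribution, which this model ignores.
# Dimensions and pressure differential are illustrative values.
import math

DELTA_P = 50_000.0   # pressure differential in pascals (about 0.5 atm)


def circle_area(radius_m: float) -> float:
    return math.pi * radius_m ** 2


def ellipse_area(a_m: float, b_m: float) -> float:
    return math.pi * a_m * b_m   # semi-major and semi-minor axes


r = 0.02             # 2 cm radius circle
a, b = 0.04, 0.01    # elongated ellipse with the same area
for name, area in [("circle", circle_area(r)), ("ellipse", ellipse_area(a, b))]:
    print(f"{name}: area={area * 1e4:.2f} cm^2, pull-off force={DELTA_P * area:.1f} N")
```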

“Now, I’ve modified [the suction cups] a bit to fit our tag that we currently have,” Hernandez said. “So, I have some versions of those cups that are ready to be deployed on the tags.”


Project CETI uses drones to monitor sperm whale movements and to place the tags on the whales. | Source: Project CETI

Project CETI continues iterating

The Project CETI team is actively deploying its tags using a number of methods, including having biologists press them onto whales using long poles, a method called pole tagging, and using drones to press the tags onto the whales. 

Once attached, the tags stay on for anywhere from a few hours to a few days. After they fall off, the CETI team can track them down and retrieve all of the gathered data. CETI isn’t interested in making tags that stay on the whales long-term, because sperm whales can travel long distances in just a few days, which could hinder the team’s ability to recover the tags after they fall off.

The CETI team said it plans to continue iterating on the suction grippers and trying new ways to gently get crucial data from sperm whales. It’s even looking into tags that would be able to slightly crawl to different positions on the whale to gather information about what the whale is eating, Gruber said. The team is also interested in exploring tags that could recharge themselves. 

“We’re always continuing to make things more and more gentle, more and more innovative,” said Gruber. “And putting that theme forward of how can we be almost invisible in this project.”
