Artificial Intelligence / Cognition Archives - The Robot Report
Robotics news, research and analysis

RTI Connext to deliver real-time data connectivity to NVIDIA Holoscan

June 25, 2024

RTI Connext provides reliable communications for users of NVIDIA's Holoscan SDK to speed development of devices such as surgical robots.


Medical device developers can now use RTI Connext and NVIDIA Holoscan. Source: Real-Time Innovations

Devices such as surgical robots need reliable, continuous data streaming across distributed sensors and devices. Real-Time Innovations, or RTI, today said it is collaborating with NVIDIA Corp. to deliver real-time data connectivity for the NVIDIA Holoscan software development kit with RTI Connext.

“Connectivity is the foundation for cutting-edge technologies, such as AI, that are transforming the medtech industry and beyond,” stated Darren Porras, market development manager for medical at Real-Time Innovations. “We’re proud to work with NVIDIA to harness the transformative power of AI to revolutionize healthcare.”

“By providing competitive, tailored solutions, we are paving the way for sustainable business value across the healthcare, automotive, and industrial sectors, marking an important step toward a future where technology enhances the quality of life and drives innovation,” he added.

Founded in 1991, Real-Time Innovations claimed that it has 2,000 customer designs and that its software runs more than 250 autonomous vehicle programs, controls North America’s largest power plants, and integrates over 400 defense programs. The Sunnyvale, Calif.-based company said its systems also support next-generation medical technologies and surgical robots, Canada’s air traffic control, and NASA’s launch-control systems.

RTI Connext designed to reliably distribute data

The RTI Connext software framework enables users to build intelligent distributed systems that combine advanced sensing, fast control, and artificial intelligence algorithms, said Real-Time Innovations. This can help developers bring capable systems to market faster, it said.

“Connext facilitates interoperable and real-time communication for complex, intelligent systems in the healthcare industry and beyond,” according to RTI. It is based on the Data Distribution Service (DDS) standard and has been proven across industries to reliably communicate data, the company said.

Product teams can now efficiently build and deploy AI-enabled applications and distributed systems that require low-latency and reliable data sharing for sensor and video processing. Connext, which is available for free trials, allows applications to work together as one, said RTI.
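
RTI's production API is far more extensive, but the publish/subscribe pattern at the heart of the DDS standard can be sketched in a few lines of plain Python. The bus class, topic name, and sample format below are illustrative stand-ins, not RTI Connext code:

from collections import defaultdict

class ToyBus:
    """A minimal in-process stand-in for a DDS-style data bus."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        # A reader declares interest in a topic, not in any specific writer.
        self._subscribers[topic].append(callback)

    def publish(self, topic, sample):
        # A writer publishes a data sample; every matched reader receives it.
        for callback in self._subscribers[topic]:
            callback(sample)

bus = ToyBus()
bus.subscribe("endoscope/frames", lambda s: print("AI node received", s))
bus.publish("endoscope/frames", {"frame_id": 42, "latency_ms": 3.1})

Because publishers and subscribers agree only on topics and data types, new sensors or AI nodes can be added without rewiring existing ones, which is the kind of decoupling DDS provides at scale.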

NVIDIA Holoscan gets advanced data flows

RTI Connext provides a connectivity framework for the NVIDIA Holoscan software development kit (SDK), offering integration across various systems and sensors to complement its AI capabilities. 

“Enterprises are looking for advanced software-defined architectures that deliver on low latency, flexibility, reliability, scalability, and cybersecurity,” said David Niewolny, director of business development for healthcare and medical at NVIDIA. “With RTI Connext and NVIDIA Holoscan, medical technology developers can accelerate their software-defined product visions by leveraging infrastructure purpose-built for healthcare applications.”

Connext now integrates with NVIDIA’s AI sensor-processing pipelines and reference workflows, bolstering data flows and real-time AI processing across a system of systems. With capabilities for real-time visualization and data-driven insights, the technologies can help drive more precise and automated minimally invasive procedures, clinical monitoring, and next-generation medical imaging platforms. They can also help developers create smarter, integrated systems across industries, said the partners.

NVIDIA said Holoscan offers the software and hardware needed to build AI applications and deploy sensor-processing capabilities from edge to cloud. This can help companies explore new capabilities, accelerate time to market, and lower costs, said the Santa Clara, Calif.-based company.

NVIDIA Holoscan now supports interoperability with a wide range of legacy systems, such as Windows-based medical devices, real-time operating system nodes in surgical robots, and patient-monitoring systems, through RTI Connext.

Wayve launches PRISM-1 4D reconstruction model for autonomous driving

June 18, 2024

Wayve says PRISM-1 enables scalable, realistic re-simulations of complex scenes with minimal engineering or labeling input.


A scene reconstructed by Wayve’s PRISM-1 technology. | Source: Wayve

Wayve, a developer of embodied artificial intelligence, launched PRISM-1, a 4D reconstruction model that it said can enhance the testing and training of its autonomous driving technology. 

The London-based company first showed the technology in December 2023 through its Ghost Gym neural simulator. Wayve used novel view synthesis to create precise 4D scene reconstructions (three dimensions in space plus time) using only camera inputs.

It achieved this using unique methods that it claimed will accurately and efficiently simulate the dynamics of complex and unstructured environments for advanced driver-assist systems (ADAS) and self-driving vehicles. PRISM-1 is the model that powers the next generation of Ghost Gym simulations.

“PRISM-1 bridges the gap between the real world and our simulator,” stated Jamie Shotton, chief scientist at Wayve. “By enhancing our simulation platform with accurate dynamic representations, Wayve can extensively test, validate, and fine-tune our AI models at scale.”

“We are building embodied AI technology that generalizes and scales,” he added. “To achieve this, we continue to advance our end-to-end AI capabilities, not only in our driving models, but also through enabling technologies like PRISM-1. We are also excited to publicly release our WayveScenes101 dataset, developed in conjunction with PRISM-1, to foster more innovation and research in novel view synthesis for driving.”

PRISM-1 excels at realism in simulation, Wayve says

Wayve said PRISM-1 enables scalable, realistic re-simulations of complex driving scenes with minimal engineering or labeling input. 

Unlike traditional methods, which rely on lidar and 3D bounding boxes, PRISM-1 uses novel view synthesis techniques to accurately depict moving elements like pedestrians, cyclists, vehicles, and traffic lights. The system includes precise details, like clothing patterns, brake lights, and windshield wipers.

Achieving realism is critical for building an effective training simulator and evaluating driving technologies, according to Wayve. Traditional simulation technologies treat vehicles as rigid entities and fail to capture safety-critical dynamic behaviors like indicator lights or sudden braking. 

PRISM-1, on the other hand, uses a flexible framework that can identify and track changes in the appearance of scene elements over time, said the company. This enables it to precisely re-simulate complex dynamic scenarios with elements that change in shape and move throughout the scene. 

It can distinguish between static and dynamic elements in a self-supervised manner, avoiding the need for explicit labels, scene graphs, and bounding boxes to define the configuration of a busy street.

Wayve said this approach maintains efficiency, even as scene complexity increases, ensuring that more complex scenarios do not require additional engineering effort. This makes PRISM-1 a scalable and efficient system for simulating complex urban environments, it asserted.
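
Wayve has not published PRISM-1's internals, but the core idea of separating static from dynamic scene content without labels can be shown with a toy example: tracked points whose positions barely change across frames are treated as background, while points that move are assigned to dynamic elements. The threshold and data below are invented for illustration:

import numpy as np

def dynamic_mask(tracks, motion_threshold=0.1):
    # tracks: (num_points, num_frames, 3) positions of scene points over time.
    displacement = tracks.max(axis=1) - tracks.min(axis=1)
    return np.linalg.norm(displacement, axis=1) > motion_threshold

rng = np.random.default_rng(0)
static = np.tile(rng.uniform(-5, 5, (45, 1, 3)), (1, 10, 1))   # stationary scenery
moving = np.tile(rng.uniform(-5, 5, (5, 1, 3)), (1, 10, 1))
moving[:, :, 0] += np.linspace(0, 2, 10)                       # objects drifting 2 m
mask = dynamic_mask(np.concatenate([static, moving]))
print(int(mask.sum()), "of", mask.size, "points classified as dynamic")

A real system works from camera pixels rather than given 3D tracks, but letting motion itself define the split is what removes the need for hand labeling.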

WayveScenes101 benchmark released

Wayve also released its WayveScenes101 benchmark. The dataset comprises 101 diverse driving scenarios from the U.K. and the U.S., including urban, suburban, and highway scenes across various weather and lighting conditions.

The company said it aims for the dataset to support the AI research community in advancing novel view synthesis models and developing more robust and accurate scene representation models for driving.

Last month, Wayve closed a $1.05 billion Series C funding round. SoftBank Group led the round, which also included new investor NVIDIA and existing investor Microsoft.

Since its founding, Wayve has developed and tested its autonomous driving system on public roads. It has also developed foundation models for autonomy, similar to “GPT for driving,” that it says can empower any vehicle to perceive its surroundings and safely drive through diverse environments. 

Waabi raises $200M from Uber, NVIDIA, and others on the road to self-driving trucks

June 18, 2024

Waabi, which has been developing self-driving trucks using generative AI, plans to put its systems on Texas roads in 2025.


The Waabi Driver includes a generative AI stack as well as sensors and compute hardware. Source: Waabi

Autonomous passenger vehicles have hit potholes over the past few years, with accidents leading to regulatory scrutiny, but investment in self-driving trucks has continued. Waabi today announced that it has raised $200 million in an oversubscribed Series B round. The funding brings total investment in the Toronto-based startup to more than $280 million.

Waabi said that it “is on the verge of Level 4 autonomy” and that it expects to deploy fully autonomous trucks in Texas next year. The company claimed that it has been able to advance quickly toward that goal because of its use of generative artificial intelligence in the physical world.

“I have spent most of my professional life dedicated to inventing new AI technologies that can deliver on the enormous potential of AI in the physical world in a provably safe and scalable way,” stated Raquel Urtasun, a professor at the University of Toronto and founder and CEO of Waabi.

“Over the past three years, alongside the incredible team at Waabi, I have had the chance to turn these breakthroughs into a revolutionary product that has far surpassed my expectations,” she added. “We have everything we need — breakthrough technology, an incredible team, and pioneering partners and investors — to launch fully driverless autonomous trucks in 2025. This is monumental for the industry and truly marks the beginning of the next frontier for AI.”

Waabi uses generative AI to reduce on-road testing

Waabi said it is pioneering generative AI for the physical world, starting with applying the technology to self-driving trucks. The company said it has developed “a single end-to-end AI system that is capable of human-like reasoning, enabling it to generalize to any situation that might happen on the road, including those it has never seen before.”

Because of that ability to generalize, the system requires significantly less training data and compute resources in comparison with other approaches to autonomy, asserted Waabi. In addition, the company claimed that its system is fully interpretable and that its safety can be validated and verified.

The company said Copilot4D, its “end-to-end AI system, paired with Waabi World, the world’s most advanced simulator, reduces the need for extensive on-road testing and enables a safer, more efficient solution that is highly performant and scalable from Day 1.”

Several industry observers have pointed out that self-driving trucks will likely arrive on public roads before widespread deployments of robotaxis in the U.S. While Waymo has pumped the brakes on its trucking development, other companies have made progress, including Inceptio, FERNRIDE, Kodiak Robotics, and Aurora.

At the same time, work on self-driving cars continues, with Wayve raising $1.05 billion last month and TIER IV obtaining $54 million. General Motors invested another $850 million in Cruise yesterday.

“Self-driving technology is a prime example of how AI can dramatically improve our lives,” said AI luminary Geoff Hinton. “Raquel and Waabi are at the forefront of innovation, developing a revolutionary approach that radically changes the way autonomous systems work and leads to safer and more efficient solutions.”

Waabi plans to expand its commercial operations and grow its team in Canada and the U.S. The company cited recent accomplishments, including the opening of its new Texas AV trucking terminal, a collaboration with NVIDIA to integrate NVIDIA DRIVE Thor into the Waabi Driver, and its ongoing partnership with Uber Freight. It has run autonomous shipments for Fortune 500 companies and top-tier shippers in Texas.

Copilot4D predicts future lidar point clouds from a history of past observations, similar to how large language models (LLMs) predict the next word given the preceding text. Waabi said it designed a three-stage architecture that exploits breakthroughs in LLMs to deliver the first 4D foundation model. Source: Waabi
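
Waabi has described Copilot4D in its research, and the next-token analogy can be illustrated abstractly. In the toy sketch below, each lidar sweep is assumed to have already been compressed into a small tuple of discrete tokens (the job of a learned tokenizer in the real system), and a simple frequency model predicts the tokens of the next sweep; none of this is Waabi's actual implementation:

from collections import Counter, defaultdict

# Toy "tokenized" lidar history: each sweep reduced to a tuple of discrete codes.
history = [(3, 7), (3, 9), (3, 7), (3, 9), (3, 7)]

# Fit a first-order model of which sweep tends to follow which.
transitions = defaultdict(Counter)
for prev, nxt in zip(history, history[1:]):
    transitions[prev][nxt] += 1

# Predict the most likely next sweep from the latest observation,
# analogous to a language model predicting the next word.
latest = history[-1]
prediction = transitions[latest].most_common(1)[0][0]
print("given sweep", latest, "predict next sweep tokens", prediction)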

Technology leaders invest in self-driving trucks

Waabi noted that top AI, automotive, and logistics enterprises were among its investors. Uber and Khosla Ventures led Waabi’s Series B round. Other participants included NVIDIA, Volvo Group Venture Capital, Porsche Automobil Holding, Scania Invest, and Ingka Investments.

“Waabi is developing autonomous trucking by applying cutting-edge generative AI to the physical world,” said Jensen Huang, founder and CEO of NVIDIA. “I’m excited to support Raquel’s vision through our investment in Waabi, which is powered by NVIDIA technology. I have championed Raquel’s pioneering work in AI for more than a decade. Her tenacity to solve the impossible is an inspiration.”

Additional support came from HarbourVest Partners, G2 Venture Partners, BDC Capital’s Thrive Venture Fund, Export Development Canada, Radical Ventures, Incharge Capital, and others.

“We are big believers in the potential for autonomous technology to revolutionize transportation, making a safer and more sustainable future possible,” added Dara Khosrowshahi, CEO of Uber. “Raquel is a visionary in the field, and under her leadership, Waabi’s AI-first approach provides a solution that is extremely exciting in both its scalability and capital efficiency.”

Vinod Khosla, founder of Khosla Ventures, said: “Change never comes from incumbents but from the innovation of entrepreneurs that challenge the status quo. Raquel and her team at Waabi have done exactly that with their products and business execution. We backed Waabi very early on with the bet that generative AI would transform transportation and are thrilled to continue on this journey with them as they move towards commercialization.”

At CVPR, NVIDIA offers Omniverse microservices, shows advances in visual generative AI

June 17, 2024

Omniverse Cloud Sensor RTX can generate synthetic data for robotics, says NVIDIA, which is presenting over 50 research papers at CVPR.


As shown at CVPR, Omniverse Cloud Sensor RTX microservices generate high-fidelity sensor simulation from an autonomous vehicle (left) and an autonomous mobile robot (right). Sources: NVIDIA, Fraunhofer IML (right)

NVIDIA Corp. today announced NVIDIA Omniverse Cloud Sensor RTX, a set of microservices that enable physically accurate sensor simulation to accelerate the development of all kinds of autonomous machines.

NVIDIA researchers are also presenting more than 50 research projects around visual generative AI at the Computer Vision and Pattern Recognition, or CVPR, conference this week in Seattle. They include new techniques to create and interpret images, videos, and 3D environments. In addition, the company said it has created its largest indoor synthetic dataset with Omniverse for CVPR’s AI City Challenge.

Sensors provide industrial manipulators, mobile robots, autonomous vehicles, humanoids, and smart spaces with the data they need to comprehend the physical world and make informed decisions.

NVIDIA said developers can use Omniverse Cloud Sensor RTX to test sensor perception and associated AI software in physically accurate, realistic virtual environments before real-world deployment. This can enhance safety while saving time and costs, it said.

“Developing safe and reliable autonomous machines powered by generative physical AI requires training and testing in physically based virtual worlds,” stated Rev Lebaredian, vice president of Omniverse and simulation technology at NVIDIA. “Omniverse Cloud Sensor RTX microservices will enable developers to easily build large-scale digital twins of factories, cities and even Earth — helping accelerate the next wave of AI.”

Omniverse Cloud Sensor RTX supports simulation at scale

Built on the OpenUSD framework and powered by NVIDIA RTX ray-tracing and neural-rendering technologies, Omniverse Cloud Sensor RTX combines real-world data from videos, cameras, radar, and lidar with synthetic data.

Omniverse Cloud Sensor RTX includes software application programming interfaces (APIs) to accelerate the development of autonomous machines for any industry, NVIDIA said.

Even for scenarios with limited real-world data, the microservices can simulate a broad range of activities, claimed the company. Examples include checking whether a robotic arm is operating correctly, an airport luggage carousel is functional, a tree branch is blocking a roadway, a factory conveyor belt is in motion, or a robot or person is nearby.
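
NVIDIA has not detailed the microservices' internals, but the role of a sensor model in such simulation can be illustrated simply: the simulator knows the exact ground-truth ranges in a virtual scene, and a sensor model degrades them the way real hardware would. The noise figures below are invented:

import numpy as np

rng = np.random.default_rng(7)

def simulate_lidar(true_ranges, noise_std=0.02, dropout=0.05, max_range=100.0):
    # Degrade ideal ranges with Gaussian noise and random no-return beams,
    # a crude stand-in for a physically based sensor model.
    ranges = true_ranges + rng.normal(0.0, noise_std, true_ranges.shape)
    lost = rng.random(true_ranges.shape) < dropout
    ranges[lost] = max_range  # beams with no return report maximum range
    return np.clip(ranges, 0.0, max_range)

ideal = np.linspace(1.0, 30.0, 8)  # ground-truth distances from the virtual scene
print(simulate_lidar(ideal).round(2))

Perception software tested against degraded measurements like these, rather than perfect ones, is less likely to be surprised by real sensors in the field.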

Microservice to be available for AV development 

CARLA, Foretellix, and MathWorks are among the first software developers with access to Omniverse Cloud Sensor RTX for autonomous vehicles (AVs). The microservices will also enable sensor makers to validate and integrate digital twins of their systems in virtual environments, reducing the time needed for physical prototyping, said NVIDIA.

Omniverse Cloud Sensor RTX will be generally available later this year. NVIDIA noted that its announcement coincided with its first-place win at the Autonomous Grand Challenge for End-to-End Driving at Scale at CVPR.

The NVIDIA researchers’ winning workflow can be replicated in high-fidelity simulated environments with Omniverse Cloud Sensor RTX. Developers can use it to test self-driving scenarios in physically accurate environments before deploying AVs in the real world, said the company.

Two of NVIDIA’s papers — one on the training dynamics of diffusion models and another on high-definition maps for autonomous vehicles — are finalists for the Best Paper Awards at CVPR.

The company also said its win for the End-to-End Driving at Scale track demonstrates its use of generative AI for comprehensive self-driving models. The winning submission outperformed more than 450 entries worldwide and received CVPR’s Innovation Award.

Collectively, the work introduces artificial intelligence models that could accelerate the training of robots for manufacturing, enable artists to more quickly realize their visions, and help healthcare workers process radiology reports.

“Artificial intelligence — and generative AI in particular — represents a pivotal technological advancement,” said Jan Kautz, vice president of learning and perception research at NVIDIA. “At CVPR, NVIDIA Research is sharing how we’re pushing the boundaries of what’s possible — from powerful image-generation models that could supercharge professional creators to autonomous driving software that could help enable next-generation self-driving cars.”

Foundation model eases object pose estimation

NVIDIA researchers at CVPR are also presenting FoundationPose, a foundation model for object pose estimation and tracking that can be instantly applied to new objects during inference, without the need for fine-tuning. The model uses either a small set of reference images or a 3D representation of an object to understand its shape. It set a new record on a benchmark for object pose estimation.

FoundationPose can then identify and track how that object moves and rotates in 3D across a video, even in poor lighting conditions or complex scenes with visual obstructions, explained NVIDIA.

Industrial robots could use FoundationPose to identify and track the objects they interact with. Augmented reality (AR) applications could also use it with AI to overlay visuals on a live scene.
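
FoundationPose itself is a learned model, but its output, a 6D pose, is simply a rotation plus a translation. The sketch below shows how a downstream application might apply an estimated pose to an object's model points to locate it in the camera frame; the pose values are invented for illustration:

import numpy as np

def apply_pose(points, rotation, translation):
    # Map object-frame points into the camera frame: p' = R @ p + t.
    return points @ rotation.T + translation

# Unit-cube corners in the object's own coordinate frame.
corners = np.array([[x, y, z] for x in (0, 1) for y in (0, 1) for z in (0, 1)], float)

# Hypothetical pose estimate: 90-degree rotation about z, half a meter ahead.
theta = np.pi / 2
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0, 0.0, 1.0]])
t = np.array([0.0, 0.0, 0.5])
print(apply_pose(corners, R, t).round(2))

Tracking then amounts to updating R and t frame by frame as the object moves through the video.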

NeRFDeformer transforms data from a single image

NVIDIA’s research includes a text-to-image model that can be customized to depict a specific object or character, a new model for object-pose estimation, a technique to edit neural radiance fields (NeRFs), and a visual language model that can understand memes. Additional papers introduce domain-specific innovations for industries including automotive, healthcare, and robotics.

A NeRF is an AI model that can render a 3D scene based on a series of 2D images taken from different positions in the environment. In robotics, NeRFs can generate immersive 3D renders of complex real-world scenes, such as a cluttered room or a construction site.

However, to make any changes, developers would need to manually define how the scene has transformed — or remake the NeRF entirely.

Researchers from the University of Illinois Urbana-Champaign and NVIDIA have simplified the process with NeRFDeformer. The method can transform an existing NeRF using a single RGB-D image, which is a combination of a normal photo and a depth map that captures how far each object in a scene is from the camera.


Researchers have simplified the process of generating a 3D scene from 2D images using NeRFs. Source: NVIDIA
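
Independent of NeRFDeformer's specifics, the reason a single RGB-D image carries enough information to drive such a transform is that depth plus camera intrinsics recovers 3D structure directly. A standard pinhole backprojection looks like this (the intrinsics here are invented):

import numpy as np

def backproject(depth, fx, fy, cx, cy):
    # Pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy, Z = depth.
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1)

depth = np.full((4, 4), 2.0)  # a flat surface 2 m from the camera
cloud = backproject(depth, fx=500.0, fy=500.0, cx=2.0, cy=2.0)
print(cloud.shape)  # (4, 4, 3): one 3D point per pixel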

JeDi model shows how to simplify image creation at CVPR

Creators typically use diffusion models to generate specific images based on text prompts. Prior research focused on the user training a model on a custom dataset, but the fine-tuning process can be time-consuming and inaccessible to general users, said NVIDIA.

JeDi, a paper by researchers from Johns Hopkins University, Toyota Technological Institute at Chicago, and NVIDIA, proposes a new technique that allows users to personalize the output of a diffusion model within a couple of seconds using reference images. The team found that the model outperforms existing methods.

NVIDIA added that JeDi can be combined with retrieval-augmented generation, or RAG, to generate visuals specific to a database, such as a brand’s product catalog.


JeDi is a new technique that allows users to easily personalize the output of a diffusion model within a couple of seconds using reference images, like an astronaut cat that can be placed in different environments. Source: NVIDIA

Visual language model helps AI get the picture

NVIDIA said it has collaborated with the Massachusetts Institute of Technology (MIT) to advance the state of the art for vision language models, which are generative AI models that can process videos, images, and text. The partners developed VILA, a family of open-source visual language models that they said outperforms prior neural networks on benchmarks that test how well AI models answer questions about images.

VILA’s pretraining process provided enhanced world knowledge, stronger in-context learning, and the ability to reason across multiple images, claimed the MIT and NVIDIA team.

The VILA model family can be optimized for inference using the NVIDIA TensorRT-LLM open-source library and can be deployed on NVIDIA GPUs in data centers, workstations, and edge devices.


VILA can understand memes and reason based on multiple images or video frames. Source: NVIDIA

Generative AI drives AV, smart city research at CVPR

NVIDIA Research has hundreds of scientists and engineers worldwide, with teams focused on topics including AI, computer graphics, computer vision, self-driving cars, and robotics. A dozen of the NVIDIA-authored CVPR papers focus on autonomous vehicle research.

“Producing and Leveraging Online Map Uncertainty in Trajectory Prediction,” a paper authored by researchers from the University of Toronto and NVIDIA, has been selected as one of 24 finalists for CVPR’s best paper award.

In addition, Sanja Fidler, vice president of AI research at NVIDIA, will present on vision language models at the Workshop on Autonomous Driving today.

NVIDIA has contributed to the CVPR AI City Challenge for the eighth consecutive year to help advance research and development for smart cities and industrial automation. The challenge’s datasets were generated using NVIDIA Omniverse, a platform of APIs, software development kits (SDKs), and services for building applications and workflows based on Universal Scene Description (OpenUSD).


AI City Challenge synthetic datasets span multiple environments generated by NVIDIA Omniverse, allowing hundreds of teams to test AI models in physical settings such as retail and warehouse environments to enhance operational efficiency. Source: NVIDIA

About the author

Isha Salian writes about deep learning, science and healthcare, among other topics, as part of NVIDIA’s corporate communications team. She first joined the company as an intern in summer 2015. Isha has a journalism M.A., as well as undergraduate degrees in communication and English, from Stanford.

Collaborative Robotics expands with new Seattle office and AI team

June 14, 2024

Collaborative Robotics has established a foundation models AI team and partnered with the University of Washington on research.


Collaborative Robotics has kept its actual robot out of public view. | Source: Adobe Stock, Photoshopped by The Robot Report

Collaborative Robotics, a developer of cobots for logistics, today announced the establishment of a Foundation Models AI team. Michael Vogelsong, a founder of Amazon’s Deep Learning Tech team, will lead the new team in Seattle.

“Our cobots are already doing meaningful work in production on behalf of our customers,” stated Brad Porter, CEO of Collaborative Robotics. “Our investment in building a dedicated foundation models AI team for robotics represents a significant step forward as we continue to increase the collaborative potential of our cobots.”

“The foundation models AI team will explore the cutting-edge possibilities of AI in enhancing robotic capabilities, particularly in the area of bimanual manipulation and low-latency multimodal models,” he added. “We aim to achieve a new level of comprehension and control in our robots, enabling them to understand and respond effectively to complex tasks and environments. I am looking forward to seeing the innovations this talented team creates.”

Collaborative Robotics keeps its system under wraps

In April, Collaborative Robotics closed its $100 million Series B round toward commercializing its autonomous mobile manipulator. The company has been very secretive about the actual design of its system, releasing only scant details about its payload capabilities and the fact that it is a wheeled collaborative robot.

At the time, Porter told The Robot Report that the new cobot’s base is capable of omnidirectional motion with four wheels and a swerve-drive design, along with a central tower-like structure that can acquire, carry, and place totes and boxes around a warehouse.


Brad Porter of Collaborative Robotics (far right) participated in a debate on whether humanoid robots are reality or hype at Robotics Invest this week in Boston. Credit: Eugene Demaitre

Foundation AI models coming to robotics

Foundation AI models are currently one of the hottest topics in robotics, with many companies investing in both talent and intellectual property to develop the technology. Foundation models offer the promise of generalizing behaviors and reducing the effort to build and maintain special-purpose models.

Collaborative Robotics said its new Foundation Models AI team will concentrate on integrating advanced machine-learning techniques into its production robots. By combining existing foundation models, novel research, and strategic partnerships with the practical experience from running systems live in production environments, the team aims to improve the adaptability and precision of robotic tasks.

Building on the company’s earlier work in developing an Auditable Control and Planning Framework (ACoP), this research will explore how models that process text, vision, and actions can interact and create a real-time feedback loop for adaptive control.
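
Collaborative Robotics has not published ACoP's design, so the sketch below only illustrates the general shape of such a feedback loop: a perception step summarizes the scene, a language-conditioned policy picks an action, and every step is logged for auditability. Every function and value here is a placeholder:

def perceive(step):
    # Placeholder for a vision model; the tote gets closer as the robot approaches.
    return {"tote_visible": True, "distance_m": [0.8, 0.5, 0.1][step]}

def decide(instruction, observation):
    # Placeholder for a language-conditioned policy.
    if "fetch" in instruction and observation["tote_visible"]:
        return "approach" if observation["distance_m"] > 0.2 else "grasp"
    return "wait"

audit_log = []  # an auditable record of each perception -> action step
for step in range(3):
    obs = perceive(step)
    action = decide("fetch the tote", obs)
    audit_log.append((step, obs, action))

print([entry[2] for entry in audit_log])  # ['approach', 'approach', 'grasp']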

The company also announced that it is funding Ph.D. work at the University of Washington through a “significant” gift. This gift will sponsor the research of Prof. Sidd Srinivasa, an academic leader in AI and robotics, who also serves as an advisor to Collaborative Robotics.

“The collaboration with Cobot supports our ongoing research at the University of Washington,” said Srinivasa. “Cobot’s commitment to advancing AI and robotics aligns well with our research goals and will help us advance robotic capabilities across multiple dimensions, particularly in the area of bimanual manipulation.”

Collaborative Robotics plans this month to open its Seattle office, which will serve as a hub for these advanced research activities. The company said it expects the city’s tech ecosystem to support its expansion and research goals.

Inside the development of FarmWise’s weeding robot

June 11, 2024

Learn how FarmWise overcame the challenges of developing a weeding robot with linear slides, computer vision, AI and more.


FarmWise is an agtech company pushing the boundaries of automation in agriculture by harnessing the power of computer vision and artificial intelligence (AI). Its flagship product, the Vulcan precision weeding implement, is designed to optimize weed control management on vegetable farms in California, which have been slow to automate due to the complex and versatile nature of specialty crop farming.

By combining cutting-edge technology with custom-built components, FarmWise enhances efficiency, increases crop yields, and addresses labor shortages with a high-accuracy and fully mechanized process to remove weeds.

FarmWise won an RBR50 Robotics Innovation Award for the weeding system in 2021 and will be speaking at RoboBusiness, which runs Oct. 16-17 in Santa Clara, Calif.

Vulcan Automated Weeding System

The Vulcan intra-row weeding implement is FarmWise’s answer to the challenges posed by weed competition in vegetable farms. Weeds can adversely impact crop yield by competing for essential resources such as water, light, and nutrients. Traditional cultivation methods, combined with hand weeding, are labor-intensive and costly, especially in regions like California where labor shortages and rising wages are prevalent.

FarmWise’s Vulcan Automated Weeding System is a pull-behind solution focused on in-season weed control management. The system leverages computer vision and AI to address three key challenges associated with weed removal:

  • Precision
  • Labor
  • Herbicides

Traditional cultivation is imprecise. It either leaves some weeds or only partially removes weeds in between rows of plants. Such cultivators are also cumbersome and error-prone due to a lack of automation and precision control that can lead to mistakes such as crop kills.

Hand weeding is more precise, but it requires time-consuming, physically challenging, and repetitive manual labor that is also expensive for producers. Chemically suppressing weeds has been the most common, efficient, and cost-effective method for controlling weeds in row crops. Using herbicides is becoming less attractive for two major reasons: a shortage of herbicides on the market and the environmental call for farmers to use more sustainable weed control methods.

The Vulcan intra-row weeding implement accurately detects and differentiates crops from weeds, allowing for precision weed removal without damaging crops. This level of precision saves farmers up to $250 per acre, maximizes yield potential, and minimizes the need for expensive manual labor.

Key challenges and customization with PBC linear slides

One of the major challenges FarmWise faced was developing a system capable of adapting to the variety of crops, bed spacings, row spacings, and soil morphologies found on vegetable farms. Compared to corn farming in the Midwest, which has undergone significant automation, vegetable farming in California remains labor-intensive due to its complexity.

To meet this need, FarmWise leveraged advancements in deep learning and precision control software to develop Vulcan, which features a perception module combined with an actuator to perform consistent intra- and inter-row weeding at row level across a diverse portfolio of crops.

The weeder module has two translation axes, including a hydraulic z-axis actuator, allowing it to move up to a dozen inches or so vertically. A feeler wheel arrangement locates the weeder module relative to the crop surface and informs it of changes in the bed’s topology. The balance between automation and user control, however, was critical to the success of this application, according to FarmWise senior mechanical engineer David Olivero.




“While the goal is to maximize automation, we acknowledge the farmers’ expertise,” Olivero explains. “Farmers understand the optimal depth for weeder blades to effectively remove weeds while avoiding root damage, and we wanted to empower farmers by allowing them to adjust blade depth according to their preference for deeper, more effective weeding or shallower weeding to protect crop roots.”

To achieve this flexibility and reach in the z-axis, FarmWise specified the UGA Low Profile Uni-Guide with a custom-positioned hand brake from PBC Linear. Up to 18 of these slides, located at the back of the implement, add a few inches of vertical travel. This addition extends bed capabilities and accommodates varying soil types. The slides offer a robust and customizable solution to adjust the system’s height, enabling it to cater to various farm configurations and terrains.

“We appreciated PBC Linear’s customization platform, which allowed them to create a slide with a specific mount offset tailored to our unique requirements,” says Olivero. “The low-profile design of the slides was vital to reduce the cantilever length of the weeder module, mitigating the risk of transport shock during field-to-field movement. PBC Linear’s reputation for quality products and ease of customization made them a preferred choice.”


A comparison of weeds on a farm before and after using FarmWise’s Vulcan weeding robot. | Credit: FarmWise

Role of computer vision and AI

Central to the Vulcan precision weeding implement’s success is the computer vision and AI in the FarmWise Intelligent Plant System (IPS) Scanner. The IPS Scanner integrates lighting with the camera sensor via a custom LED board. This package enables the capturing of consistent, high-resolution images at a high frame-per-second rate. The data immediately flows through the IPS pipeline, detecting and localizing each plant in real time.

Sophisticated detection models were developed by gathering a vast number of images and annotating them to accurately distinguish between individual crops and weeds. Using these detection models, the system determines the position of crops and the location of every crop stem and makes precise decisions on blade openings and adjustments.

As the system traverses the field, it makes micro-adjustments to ensure the highest quality weed removal. The actuation engine, controlled by the software, opens and closes the weeding blades as needed to clean the intra-row, or in between the crops located on the same line. In addition to the weeding blades that are connected to the actuator, the precision weeding implement includes a set of top knives that simultaneously clean the inter-row surface area between the rows of crops.
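
FarmWise has not released its control code, but the decision the actuation engine makes reduces to a gating rule on detected stem positions: keep the blades closed to cut weeds, and open them only around crop stems. The sketch below, with invented positions and tolerance, shows the idea:

def blade_commands(detections, safety_zone=0.05):
    # detections: list of (position_m, label) along the crop row.
    crop_stems = [p for p, label in detections if label == "crop"]
    commands = []
    for position, label in detections:
        near_crop = any(abs(position - stem) <= safety_zone for stem in crop_stems)
        # Blades open (no cutting) near crop stems to avoid crop kills,
        # and stay closed (cutting) everywhere else in the row.
        commands.append((position, "open" if near_crop else "close"))
    return commands

row = [(0.10, "crop"), (0.14, "weed"), (0.30, "crop"), (0.41, "weed")]
for position, command in blade_commands(row):
    print(f"{position:.2f} m -> blades {command}")

Note that the weed at 0.14 m is spared because it sits inside the crop's safety zone, which is exactly the precision-versus-crop-safety tradeoff the operator tunes.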

Operator interface and user control

FarmWise provides an operator interface mounted in the cab of the equipment. This touch screen–based interface enables the operator to set up and verify the system’s configuration for specific crop and field conditions. The operator can adjust precision, blade widths, and other parameters to achieve the desired results. The interface also offers diagnostics and feedback to fine-tune the system’s performance.

FarmWise’s Vulcan intra-row weeding implement represents a significant step forward in precision agriculture. By providing a tailored solution to weed control management, the system optimizes yield potential, reduces labor costs, and minimizes the need for harmful herbicides.

Through ongoing advancements in computer vision technology and machine learning algorithms, FarmWise continues to push the boundaries of automation in agriculture, offering farmers innovative tools to meet the challenges of modern farming. The collaboration with PBC Linear illustrates the importance of partnerships in developing tailored solutions that drive progress in the agricultural sector.

RBR50 Spotlight: Opteran Mind reverse-engineers natural brain algorithms for mobile robot autonomy

June 11, 2024

Opteran commercialized its vision-based approach to autonomy by releasing Opteran Mind.



Organization: Opteran
Country: U.K.
Website: https://opteran.com
Year Founded: 2019
Number of Employees: 11-50
Innovation Class: Technology


Current approaches to machine autonomy require a lot of sensor data and expensive compute and often still fail when exposed to the dynamic nature of the real world, according to Opteran. The company earned RBR50 recognition in 2021 for its lightweight Opteran Development kit, which took inspiration from research into insect intelligence.


In December 2023, Opteran commercialized its vision-based approach to autonomy by releasing Opteran Mind. The company, which has a presence in the U.K., Japan, and the U.S., announced that its new algorithms don’t require training, extensive infrastructure, or connectivity for perception and navigation.

This is an alternative to other AI and simultaneous localization and mapping (SLAM), which are based on decades-old models of the human visual cortex, said James Marshall, a professor at the University of Sheffield and chief scientific officer at Opteran. Animal brains evolved to solve for motion first, not points in space, he noted.

Instead, Opteran Mind is a software product that can run with low-cost, 2D CMOS cameras and on low-power compute for non-deterministic path planning. OEMs and systems integrators can build bespoke systems on the reference hardware for mobile robots, aerial drones, and other devices.

“We provide localization, mapping, and collision prediction from robust panoramic, stabilized 3D CMOS camera input,” explained Marshall.

At a recent live demonstration at MassRobotics in Boston, the company showed how a simple autonomous mobile robot (AMR) using Opteran Mind 4.1 could navigate and avoid obstacles in a mirrored course that would normally be difficult for other technologies.

Opteran is currently focusing on automated guided vehicles (AGVs), AMRs, and drones for warehousing, inspection, and maintenance.

“We have the only solution that provides robust localization in challenging environments with scene changes, aliasing, and highly dynamic light using the lowest-cost cameras and compute,” it said.

The company is currently working toward safety certifications and “decision engines,” according to Marshall.




Explore the RBR50 Robotics Innovation Awards 2024.


RBR50 Robotics Innovation Awards 2024

Organization | Innovation
ABB Robotics | Modular industrial robot arms offer flexibility
Advanced Construction Robotics | IronBOT makes rebar installation faster, safer
Agility Robotics | Digit humanoid gets feet wet with logistics work
Amazon Robotics | Amazon strengthens portfolio with heavy-duty AGV
Ambi Robotics | AmbiSort uses real-world data to improve picking
Apptronik | Apollo humanoid features bespoke linear actuators
Boston Dynamics | Atlas shows off unique skills for humanoid
Brightpick | Autopicker applies mobile manipulation, AI to warehouses
Capra Robotics | Hircus AMR bridges gap between indoor, outdoor logistics
Dexterity | Dexterity stacks robotics and AI for truck loading
Disney | Disney brings beloved characters to life through robotics
Doosan | App-like Dart-Suite eases cobot programming
Electric Sheep | Vertical integration positions landscaping startup for success
Exotec | Skypod ASRS scales to serve automotive supplier
FANUC | FANUC ships one-millionth industrial robot
Figure | Startup builds working humanoid within one year
Fraunhofer Institute for Material Flow and Logistics | evoBot features unique mobile manipulator design
Gardarika Tres | Develops de-mining robot for Ukraine
Geek+ | Upgrades PopPick goods-to-person system
Glidance | Provides independence to visually impaired individuals
Harvard University | Exoskeleton improves walking for people with Parkinson’s disease
ifm efector | Obstacle Detection System simplifies mobile robot development
igus | ReBeL cobot gets low-cost, human-like hand
Instock | Instock turns fulfillment processes upside down with ASRS
Kodama Systems | Startup uses robotics to prevent wildfires
Kodiak Robotics | Autonomous pickup truck to enhance U.S. military operations
KUKA | Robotic arm leader doubles down on mobile robots for logistics
Locus Robotics | Mobile robot leader surpasses 2 billion picks
MassRobotics Accelerator | Equity-free accelerator positions startups for success
Mecademic | MCS500 SCARA robot accelerates micro-automation
MIT | Robotic ventricle advances understanding of heart disease
Mujin | TruckBot accelerates automated truck unloading
Mushiny | Intelligent 3D sorter ramps up throughput, flexibility
NASA | MOXIE completes historic oxygen-making mission on Mars
Neya Systems | Development of cybersecurity standards hardens AGVs
NVIDIA | Nova Carter gives mobile robots all-around sight
Olive Robotics | EdgeROS eases robotics development process
OpenAI | LLMs enable embedded AI to flourish
Opteran | Applies insect intelligence to mobile robot navigation
Renovate Robotics | Rufus robot automates installation of roof shingles
Robel | Automates railway repairs to overcome labor shortage
Robust AI | Carter AMR joins DHL's impressive robotics portfolio
Rockwell Automation | Adds OTTO Motors mobile robots to manufacturing lineup
Sereact | PickGPT harnesses power of generative AI for robotics
Simbe Robotics | Scales inventory robotics deal with BJ’s Wholesale Club
Slip Robotics | Simplifies trailer loading/unloading with heavy-duty AMR
Symbotic | Walmart-backed company rides wave of logistics automation demand
Toyota Research Institute | Builds large behavior models for fast robot teaching
ULC Technologies | Cable Splicing Machine improves safety, power grid reliability
Universal Robots | Cobot leader strengthens lineup with UR30

Unleashing potential: The role of software development in advancing robotics

June 9, 2024

As robotics serves more use cases across industries, hardware and software development should be parallel efforts, says Radixweb.


A robotics strategy should consider software development in parallel, says Radixweb. Source: Adobe Stock

In today’s fast-moving technology era, robotics engineering is transforming multiple industrial sectors. From cartesian robots to robotaxis, cutting-edge technologies are automating applications in logistics, healthcare, finance, and manufacturing. Automation uses modern software to execute multiple tasks, or even one specific task, with minimal human intervention, so software development is a critical part of building these robots.

The growing technology stack in robotics is one reason the software development market is expected to reach a valuation of $1 billion by 2027. The industry involves designing, building, and maintaining software that uses complex algorithms, machine learning, and artificial intelligence to make operations more efficient and enable autonomous decision-making.

Integrating robotics and software development

With the evolution of robotics, this subset of software engineering offers a new era of opportunities. Developers are now working on intelligent machines that can execute multiple tasks with minimal human intervention, powered by new software frameworks designed specifically for these systems.

From perception and navigation to object recognition and manipulation, as well as higher-level tasks such as fleet management and human-machine interaction, reliable and explainable software is essential to commercially successful systems.

One of the essential functions software engineering is the building and testing of robotics applications. Hence, developers need to simulate real-world scenarios and accumulate insights for testing goals. The goal is to recognize and rectify bugs before implementing apps in a real environment.

In addition, developers should remember that they are building systems to minimize human effort, not just improve industrial efficiency. Their efforts are not just for the sake of novel technologies but to provide economic and social benefits.




Software developers can advance robotics

Integrating software and robotics promises a symbiotic partnership between the two domains. Apart from collaborating on cutting-edge systems, coordinated development efforts enable the following benefits:

  1. Consistency — Robots can be programmed to execute commands with consistency, eradicating human errors caused by distractions or fatigue.
  2. Precision — Advanced algorithms allow robots to perform tasks with high accuracy, enhancing overall product quality.
  3. Increased speed — Software-driven robots can carry out tasks much faster than human beings, saving time and money in production activities.
  4. Motion planning — Along with modern motors, motion control software allows robots to navigate through complex environments while avoiding potential injuries or collisions.
  5. Minimal risk — Advanced robots can handle tasks that involve high physical risks, extreme temperatures, or exposure to toxic materials, ensuring employees’ safety.
  6. Remote operations — Building advanced software systems for robots enables them to be monitored and controlled remotely, minimizing the need for human workers to be always present in hazardous settings.
  7. AI and machine learning — The integration of AI can help robots understand, learn, adapt, and make independent decisions based on the data collected.
  8. Real-time data analysis — As stationary and mobile platforms, robots can gather large amounts of data during their operations. With the right software, this data can easily be examined in real time to determine areas for improvement.
  9. Scalability — Robot users can use software to scale robot fleets up or down in response to ever-changing business demands, providing operational flexibility.
  10. Reduced downtime — With predictive maintenance software, robots can reliably function for a long time.
  11. Decreased labor costs — Robotics minimizes the requirement for manual labor, reducing the cost of hiring human resources and emphasizing more complex activities that need creativity and critical thinking.

Best practices for integrating software and robots

To fully leverage the benefits of software development for robotics, businesses must adopt effective strategies. Here are a few tailored practices to consider:

  • Design an intuitive user interface for managing and configuring automated processes.
  • Integrate real-time monitoring and reporting functionalities to track the progress of your tasks.
  • Adopt continuous integration practices to integrate code modifications and ensure system durability constantly.
  • Adhere to applicable data-privacy and cybersecurity protocols to maintain client trust.
  • Analyze existing workflows to detect any vulnerabilities and areas for improvement.
  • Use error-handling techniques to handle any unforeseen scenarios.
  • Implement automated testing frameworks to encourage efficient testing.
  • Provide suitable access controls to protect these systems from unauthorized access.
  • Identify the applications that can be automated for a particular market.
  • Break down complicated tasks into small, manageable steps.
  • Perform extensive testing to recognize and rectify any issues or errors.

As robotics finds new use cases, software must evolve so the hardware can satisfy the needs of more industries. For Industry 4.0, software developers are partnering with hardware and service providers to build systems that are easier to build, use, repurpose, and monitor.

Innovative combinations of software and robotics can result in new levels of autonomy and open new opportunities.

About the author

Sarrah Pitaliya is vice president of marketing at Radixweb. With a strong grasp of market research and end-to-end digital branding strategies, she leads a team focused on corporate rebranding, user experience marketing, and demand generation.

Radixweb is a software development company with offices in the U.S. and India. This entry is reposted with permission.

Investor Dean Drako acquires Cobalt Robotics

June 5, 2024

Cobalt AI is set to expand the use of its human-verified AI technology in various enterprise security applications.


The Cobalt mobile robot features autonomous driving technology, allowing it to navigate various terrains and obstacles with ease and provide constant vigilance without human operation. | Credit: Cobalt Robotics

Cobalt Robotics has been acquired by investor Dean Drako, and the name of the firm has been changed to Cobalt AI. Financial terms of the acquisition were not disclosed. The name change was made to more accurately represent the future direction of the company and the products it offers.

Drako is the founder and CEO of Eagle Eye Networks, in addition to a number of other enterprises and side projects. Cobalt AI aligns most closely with the Eagle Eye Smart Video Surveillance portfolio.

There are no major changes to Cobalt’s leadership other than Drako serving as chairman. Ken Wolff, Cobalt’s current CEO, will continue to lead the firm, which will operate independently with its current management team and entire staff.

Cobalt started with mobile robotics

Cobalt Robotics was founded in 2016 as a developer of autonomous mobile robots (AMRs) for security applications. The AMRs were designed to patrol the interior of a facility, actively surveilling activity and serving as a remotely monitored extension of the building’s security.

To meet the growing needs of its corporate customers, Cobalt developed AI-based algorithms for alarm filtering, remote monitoring, sensing, and other autonomous data-gathering functions. In addition to the sensors onboard the Cobalt AMR, the Cobalt Monitoring Intelligence and Cobalt Command Center gather data from a broad range of cameras, access control systems, robots, and other edge devices.
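
Cobalt has not published its implementation, but the "human-verified AI" pattern it describes can be sketched as confidence-gated triage: the model handles clear-cut events, and ambiguous ones are escalated to a human operator. Everything below (the event fields, thresholds, and routing labels) is illustrative, not Cobalt's actual code.

```python
from dataclasses import dataclass

@dataclass
class Event:
    source: str           # e.g., "amr_cam_3" or "door_12_badge_reader"
    description: str
    ai_confidence: float  # model's confidence that this is a real incident

def route_event(event: Event, auto_threshold: float = 0.95,
                dismiss_threshold: float = 0.10) -> str:
    """Triage a sensor event: automate the clear-cut cases and
    escalate ambiguous ones to a human operator for verification."""
    if event.ai_confidence >= auto_threshold:
        return "alert_security_team"      # high confidence: raise the alarm
    if event.ai_confidence <= dismiss_threshold:
        return "log_and_dismiss"          # clear false positive: filter it out
    return "escalate_to_human_operator"   # the human verifies the AI's call

print(route_event(Event("amr_cam_3", "person in restricted area", 0.62)))
# -> escalate_to_human_operator
```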




“The company’s monitoring and command center technology is a catalyst for a new era of security,” said Drako. “They have created field-proven AI to make security and guarding tremendously more effective and efficient. Furthermore, Cobalt’s open platform strategy, which integrates with a plethora of video and access systems, is aligned with the open product strategy I believe in.”

Drako’s vision for sensor-monitoring AI

In a recent LinkedIn post, Drako explained why he made the deal.

“I did an extensive search, with a goal to acquire the company with the most powerful AI-based enterprise security automation technology in our physical security industry. Cobalt’s AI technologies, including their monitoring and command center solutions, are years ahead — they will be one of the catalysts for a new era of security.

“Importantly, Cobalt’s open platform strategy, which integrates with a wide range of video and access control systems, aligns with the open product strategy I strongly believe in.

“I am working closely with Cobalt AI’s leadership team, as well as infusing significant capital, to quickly scale their ‘human verified AI’ technology across enterprise security applications.”


Cobalt AI is marketing "Human-verified AI" to promote human-in-the-loop methods of leveraging AI and human-based perception to monitor and interpret security information. | Credit: Dean Drako

“We are thrilled that Dean Drako has acquired Cobalt and will serve as chairman. Dean has invested capital and strategic insights to grow other physical security companies into unicorns and technology leaders in their space,” said Wolff. “We share a mutual vision of the tremendous advantages of automation through AI with human verification. Drako’s acquisition validates our strategy to improve monitoring and response times and lower costs, and it also gives us the capital to deliver for our enterprise clients.”

Vention, NVIDIA partner to bring automation and AI to small manufacturers https://www.therobotreport.com/vention-nvidia-partner-bringing-automation-small-manufacturers/ https://www.therobotreport.com/vention-nvidia-partner-bringing-automation-small-manufacturers/#respond Mon, 03 Jun 2024 19:05:54 +0000 https://www.therobotreport.com/?p=579283 Under the collaboration, Vention and NVIDIA will use AI to create near-accurate digital twins significantly faster and more efficiently.

MAP is a cloud-based command center that can design and manage automated manufacturing workcells. | Source: Vention

Vention Inc. yesterday announced a collaboration with NVIDIA Corp. to bring industrial automation to small and midsize manufacturers. The companies said they plan to use NVIDIA’s artificial intelligence and accelerated computing to advance cloud robotics. 

The partners said they will use AI to create near-accurate digital twins significantly faster and more efficiently. With this technology, manufacturers can efficiently test their projects before they invest, according to Vention and NVIDIA.

The companies said they will jointly develop generative designs for robot cells, co-pilot programming, physics-based simulation, and autonomous robots. 

“The Vention ecosystem with NVIDIA’s robotics technology and AI expertise will help bring pivotal innovation to the manufacturing renaissance and overall industry,” stated Etienne Lacroix, founder and CEO of Vention. “Now, even the most complex use cases can become achievable for small and medium[-size] manufacturers.”

Vention to simplify MAP experience with AI

Vention said its Manufacturing Automation Platform (MAP) allows clients to manage industrial robots directly from their Web browsers. The Montreal-based company said MAP draws on a proprietary dataset of several hundred thousand workcell designs created since its founding in 2016.

The announcement marks a year of collaboration with NVIDIA to apply AI to industrial automation projects. Vention said it intends to use AI to simplify the user experience in the cloud and on the edge.




NVIDIA to help bring AI to the forefront of manufacturing

NVIDIA said its technology, combined with Vention’s modular hardware and plug-and-play motion control, will bring cutting-edge AI to the forefront of manufacturing. The companies said they aim to widen access to industrial automation for small and midsize manufacturers.

“Vention’s cloud-based robotics platform, powered by NVIDIA AI, will empower industrial equipment manufacturing companies everywhere to seamlessly design, deploy, and operate robot cells, helping drive the industry forward,” stated Deepu Talla, vice president of robotics and edge computing at NVIDIA.

Vention said it is already known for its user-friendly software products and interface, and it expects to announce a number of new products resulting from this collaboration in Q3 of 2024.

This isn’t the first time NVIDIA and Vention have worked together. Vention, along with Solomon, Techman Robot, and Yaskawa, is among the companies using NVIDIA’s Isaac Manipulator to build AI-based robotic arms.

Vention also recently announced a partnership with Flexxbotics to support robot-driven manufacturing. The companies said their combined offering for robotic workcell digitalization in next-generation machining environments is now available.

NVIDIA highlights Omniverse, Isaac adoption by robot market leaders https://www.therobotreport.com/nvidia-highlights-omniverse-isaac-adoption-by-market-leaders/ https://www.therobotreport.com/nvidia-highlights-omniverse-isaac-adoption-by-market-leaders/#comments Mon, 03 Jun 2024 00:30:24 +0000 https://www.therobotreport.com/?p=579267 CEO Jensen Huang announced that robotic factories can accelerate industrial digitization with NVIDIA AI and Omniverse.


The NVIDIA Isaac platform powers electronics, healthcare and industrial applications. | Credit: eCential Robotics (left), Amazon Robotics (right)

In addition to artificial intelligence products, NVIDIA Corp. founder and CEO Jensen Huang announced several robotics-related items during his keynote today at COMPUTEX in Taiwan. The company said that many computer manufacturers are producing a new generation of “AI computers” using its chips to enable Omniverse for modeling and business workflows.

Back in April, NVIDIA announced several new robotics-related technologies at the 2024 GPU Technology Conference (GTC). These new products included Project GR00T, Jetson Thor, Isaac Lab, OSMO, Isaac Manipulator, and Isaac Perceptor.

NVIDIA Isaac Perceptor is a new reference workflow for autonomous mobile robots (AMRs) and automated guided vehicles (AGVs). Isaac Manipulator offers new foundation models and a reference workflow for industrial robotic arms.

The company has also updated Jetson for Robotics in NVIDIA JetPack 6.0. It has included NVIDIA Isaac Lab, a lightweight app for robot learning, in NVIDIA Isaac Sim 4.0.

The Santa Clara, Calif.-based company was also a 2024 RBR50 award winner for its Nova Carter reference AMR platform developed with Segway Robotics.




Manufacturers can simulate in Omniverse

While there weren’t any new robotics product announcements, NVIDIA did say that many of its partners are beginning to use AI and Isaac Sim in the design of new manufacturing facilities. By creating a digital twin of the factory floor, these companies can simulate the assembly process, programming and running robots virtually before anything is built.
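
As a rough illustration of what "programming and running robots in simulation" looks like at the code level, here is a minimal world-stepping sketch based on NVIDIA's published Isaac Sim core tutorials. It must run inside Isaac Sim's Python environment, exact module paths vary by release, and the cuboid is a stand-in for real factory assets.

```python
# Runs inside Isaac Sim's Python environment; module paths vary by release.
import numpy as np
from omni.isaac.core import World
from omni.isaac.core.objects import DynamicCuboid

world = World()
world.scene.add_default_ground_plane()

# Stand-in for a part on the line; a real digital twin would instead load
# the plant's USD stage, robot assets, and conveyor models.
part = world.scene.add(
    DynamicCuboid(prim_path="/World/part", name="part",
                  position=np.array([0.0, 0.0, 0.5])))

world.reset()
for _ in range(240):  # advance physics; robots would be commanded each step
    world.step(render=True)
print(part.get_world_pose())
```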

“Everything is going to be robotic. All of the factories will be robotic. The factories will orchestrate robots,” said Huang, as reported by Digitimes Asia. “And those robots will be building products that are robotic: robots interacting with robots, building robotic products.”

He showed video clips of robots moving autonomously in Omniverse, where the simulations ran on digital twins.

“Generative physical AI can learn skills using reinforcement learning from physics feedback in a simulated world,” Huang said during his keynote. “In these simulation environments, robots learn to make decisions by performing actions in a virtual world that obeys the laws of physics.”

Taiwanese electronics manufacturer and NVIDIA partner Foxconn is using Isaac Sim to plan factories that will produce the next generation of NVIDIA processors.

NVIDIA also announced that electronics manufacturers Delta Electronics, Pegatron, and Wistron are using NVIDIA Metropolis, Omniverse, and Isaac to simulate, build, and operate their facilities with “virtual factories.”


Foxconn’s factory simulated in Omniverse, featuring AI robots developed by NVIDIA robotics partners. | Credit: Foxconn

Top robot developers use Isaac robotics platform

NVIDIA claimed that top robot developers are using the Isaac robotics platform to create AI-enabled autonomous devices and robots. They included more than a dozen world leaders in the robotics industry, including BYD Electronics, Siemens, Teradyne Robotics, and Intrinsic.

These users are adding NVIDIA Isaac-accelerated libraries, physically-based simulation, and AI models to their software frameworks and robot models. This can make factories, warehouses, and distribution centers more efficient and safer for people who work there, said NVIDIA. It added that the robots can help people with repetitive or very precise tasks.

“The era of robotics has arrived. Everything that moves will one day be autonomous,” said Huang. “We are working to accelerate generative physical AI by advancing the NVIDIA robotics stack, including Omniverse for simulation applications, Project GR00T humanoid foundation models, and the Jetson Thor robotics computer.”

Siemens, a worldwide leader in industrial automation, uses NVIDIA Isaac Sim for its software-in-the-loop capabilities. The company said Isaac technologies speed its development and testing of new robotics skills like SIMATIC Robot PickAI (PRO) and SIMATIC Robot Pack AI.

According to Siemens, the industrial robots can now independently and successfully pick and pack arbitrary goods without human training by using cognitive AI vision software.

“AI-powered robots will accelerate the digital transformation of industry and take over repetitive tasks that were previously impossible to automate so we can unlock human potential for more creative and valuable work,” said Roland Busch, president and CEO at Siemens AG.

Siemens said it also brings vision AI to robots from KUKA, Techman Robot, Universal Robots, and Yaskawa by integrating seamlessly with their automation systems and making the software easy to use on an NVIDIA-powered Siemens industrial PC.


Foxconn virtual factory digital twin built using AI, NVIDIA Omniverse, NVIDIA Isaac and NVIDIA Metropolis. | Credit: Foxconn

Intrinsic using Isaac Manipulator to simulate robot gripping

Alphabet software and AI robotics subsidiary Intrinsic, which purchased Open Source Robotics Corporation in late 2022, tested Isaac Manipulator on its robot-agnostic software platform. With Manipulator, Intrinsic showed that a scalable, universal robotic-grasping skill can function across grippers, settings, and objects.

Solomon, Techman Robot, Vention, and Yaskawa are among the companies using Isaac Manipulator to build AI-based robotic arms. NVIDIA said that with partners ADLINK, Advantech, and ONYX, its AI Enterprise software on the IGX platform offers edge AI systems that meet the strict regulatory standards essential for medical technology and other industries.

“We couldn’t have found a better collaborator in NVIDIA, who are helping to pave the way for foundation models to have a profound impact on industrial robotics,” stated Wendy Tan White, CEO of Intrinsic. “As our teams work together on integrating NVIDIA Isaac and Intrinsic’s platform, the potential value we can unlock for millions of developers and businesses is immense.”

Over 100 companies are adopting NVIDIA Isaac Sim to simulate, test and validate robotics applications, including Hexagon, Husqvarna Group, and MathWorks. Humanoid robot developers Agility Robotics, Boston Dynamics, Figure AI, Fourier Intelligence, and Sanctuary AI are adopting Isaac Lab.

In addition, NVIDIA noted that robotics developers such as Moon Surgical and the SETI Institute are using NVIDIA Holoscan on the updated IGX Orin platform for sensor processing and deploying AI and high-performance computing for flexible sensor integration and real-time insights.

1X shows advances in voice control, chaining tasks for humanoid robots https://www.therobotreport.com/1x-shows-advances-voice-control-chaining-tasks-humanoid-robots/ https://www.therobotreport.com/1x-shows-advances-voice-control-chaining-tasks-humanoid-robots/#respond Fri, 31 May 2024 18:00:02 +0000 https://www.therobotreport.com/?p=579257 1X Technologies showed advances in AI and teleoperation enable multiple humanoids to conduct a sequence of tasks.


For humanoid robots to be useful in household settings, they must learn numerous tasks. 1X Technologies today released a video showing how it is applying artificial intelligence and teleoperation to train its robots and to control sequences of skills via voice.

“This update showcases progress we’ve made toward longer autonomous behaviors,” said Erik Jang, vice president of AI at 1X Technologies. “We’ve previously shown that our robots were able to pick up and manipulate simple objects, but to have useful home robots, you have to chain tasks together smoothly.”

“In practice, the robot doesn’t always position itself right next to a table, so we need to be able to tell it to adjust its position and then manipulate the object,” he told The Robot Report. “In building out our repertoire of skills, we’re finding a lot of other skills — like getting closer or backing up — that humans can instruct the robots with natural language.”

1X builds single tasks toward a unified model

1X Technologies has been working toward a single neural network to handle a wide range of tasks, but it is starting with training individual models through teleoperation and voice. This marks a change in how the company is approaching training and scaling of capabilities, Jang said.

“Before, we thought of a single model for thousands of tasks, but it’s hard to train for so many skills simultaneously,” he noted. “It’s important to push forward on multiple fronts, so we’ve added a few hundred individual capabilities. Our library of skills is mapped to simple language descriptions.”

1X, which has offices in Sunnyvale, Calif., and Moss, Norway, still plans to work toward a single model for all tasks. It is using “shadow mode” evaluations to compare predictions to a baseline for testing. The company already has generic navigation and manipulation policies, said Jang.
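
1X didn’t detail its evaluation setup, but "shadow mode" is a common pattern: the candidate model's predictions are logged alongside the deployed baseline's actions without ever driving the robot. A toy sketch, with invented policies, observation format, and tolerance:

```python
import numpy as np

def shadow_evaluate(baseline_policy, candidate_policy, observations,
                    tolerance=0.05):
    """Log the candidate model's predictions alongside the deployed
    baseline's actions; only the baseline ever drives the robot."""
    disagreements = 0
    for obs in observations:
        executed = baseline_policy(obs)    # action actually executed
        predicted = candidate_policy(obs)  # recorded for offline comparison
        if np.linalg.norm(executed - predicted) > tolerance:
            disagreements += 1
    return disagreements / len(observations)

# Toy usage with stand-in policies over random observations.
rng = np.random.default_rng(0)
obs_log = [rng.normal(size=4) for _ in range(100)]
baseline = lambda o: o[:2]
candidate = lambda o: o[:2] + rng.normal(scale=0.01, size=2)
print(f"disagreement rate: {shadow_evaluate(baseline, candidate, obs_log):.2%}")
```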

“We can give the robot a goal — ‘Please go to this part of the room’ — and the same neural network can navigate to all parts of the room,” he said. “Tidying up a room involves four primitives: going anywhere in the room, adjusting for position, picking something up, and putting it down.”

1X plans to add skills such as opening doors, drawers, and bottles, and Jang acknowledged that it’s still early days for building them out.

“Autonomy is hard. If a robot has to go to a second task, it has to pick up the slack from the first one,” he said. “For example, if the first robot didn’t get to the right spot next to a table, then the second robot has to stick its arm out further to grab something, and the third task has to compensate even more. Errors tend to compound.”




Voice interface enables training, higher-level actions

“We’ve built a way for humans to instruct the robots on tasks so that if they make a mistake, the human can dictate what the command should be,” he added. “We use a human in the loop issuing natural-language commands.”

In the video, 1X Technologies showed a person directing multiple robots to perform a sequence of actions with a simple voice command.

“We treat natural-language commands as a new type of action, translating from low-level instructions to higher-level actions,” said Jang. “We’re working toward robots that can work autonomously for long periods of time. Cleaning things often involves interacting with different tools and appliances. To be useful, household robots should not be limited to pick-and-place operations.”
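
1X hasn’t published its command interface, but the idea of a skill library indexed by simple language descriptions, with a human in the loop for unknown commands, can be sketched as follows. The skill names and the chaining syntax are invented for illustration.

```python
# Hypothetical skill library: each entry maps a language description
# to a low-level policy the robot can execute.
SKILLS = {
    "go to the table": lambda: print("navigating to table"),
    "adjust position": lambda: print("fine-tuning base pose"),
    "pick up the cup": lambda: print("running grasp policy"),
    "put it down":     lambda: print("running place policy"),
}

def run_command(command: str) -> None:
    """Chain skills by splitting a high-level command into known primitives."""
    for phrase in command.split(", then "):
        skill = SKILLS.get(phrase.strip().lower())
        if skill is None:
            print(f"unknown skill: {phrase!r}, asking operator for help")
            continue  # the human in the loop corrects the command
        skill()

run_command("Go to the table, then adjust position, then pick up the cup")
```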

Remote and multi-robot control lead to scalability

1X Technologies has taken the approach of having the same people who gather the data from teleoperation be the ones who train robots for their skills.

“I’m super proud of the work they do,” said Jang. “We’ve closed the loop, and the teleoperators train everything themselves. In this ‘farm-to-table’ approach, they’ve built all the capabilities.”

By showing that users without computer science experience can train robots, 1X said it is removing a bottleneck to scaling.

“In the same way we have operators train low-level skills, we can have them train higher-level ones,” Jang added. “It’s now very clear to us that we can transition away from predicting robot actions at low levels to building agents that can operate at longer horizons.”

“Once we have controls in the language space, it’s not a huge leap to see robots working with Gemini Pro Vision or GPT-4o for longer-horizon behaviors,” he said.

By enabling users to set high-level goals for multiple robots, 1X Technologies said it will also allow for more efficient fleet management.


EVE demonstrates updated AI and voice commands. Source: 1X Technologies

Humanoids are fast approaching, says 1X

Over the past year, 1X has pivoted from purely commercial deployments with EVE to more diverse settings with NEO. The company raised $100 million in January. When will humanoids using unified AI models be ready for the domestic market?

“I want it to come as fast as possible,” replied Jang. “A lot of people think that general-purpose home or humanoid robots are far away, but they’re probably a lot closer than one thinks.”

Jang asserted that by designing its own actuators, 1X has made NEO to be safe around humans, a prerequisite for household use. The hardware’s ability to compensate also allows the AI to have room for error, he said.

Still, humanoid robot developers have to do more than produce interesting videos, Jang said. They have to demonstrate capabilities in the real world and control costs on the path to commercialization.

“The onus is on us to get away from making videos to making something that people can see in person without hiding actual performance details,” he said. “Not everything with a torso and four limbs is a humanoid, and we’ve put a lot of thought into the force, torque, and strength of each. Not all robots are created equal.”

“There’s a sweet spot between overspeccing costs and underspeccing costs, which can hamper the ability to pursue AI or automation in general,” said Jang. “Many of the top humanoid companies are making different choices, and there’s a spectrum between millimeter-level precision on fingers and calibration with cameras to, on the other end, 3D-printed robots. It’s a healthy competition.”

Teradyne Robotics names James Davidson chief AI officer https://www.therobotreport.com/teradyne-robotics-names-james-davidson-chief-ai-officer/ https://www.therobotreport.com/teradyne-robotics-names-james-davidson-chief-ai-officer/#respond Fri, 31 May 2024 17:35:35 +0000 https://www.therobotreport.com/?p=579251 The move comes as Teradyne, which owns Universal Robots and Mobile Industrial Robots, has embraced AI as part of its strategy.


Teradyne Robotics has named James Davidson as its chief artificial intelligence officer, effective May 28, 2024. This move comes as Teradyne Robotics, which owns Universal Robots (UR) and Mobile Industrial Robots (MiR), has embraced AI as part of its strategy.

Davidson most recently served as chief architect at MiR, where he guided the technical direction for the new MiR1200 Pallet Jack. His broad application of AI spans diverse projects, Teradyne pointed out, from implementing Google’s pioneering AI-generated ads and developing healthcare fraud detection systems at MITRE to advancing robotics in various forms.

Davidson’s career spans more than 20 years and includes deep expertise in AI and robotics. Initially focused on satellite technologies at Sandia National Laboratories, he shifted to robotics, fueling his passion for the field through doctoral work in reinforcement learning at the University of Illinois. He has held lead research roles at Google Brain/DeepMind and MITRE, where he contributed extensively to both academic research and commercial products. Davidson then embraced entrepreneurship, steering Talos Robotics as CEO and shaping the technological vision of Third Wave Automation as CTO.

“James’ exceptional track record in AI and robotics aligns perfectly with Teradyne Robotics’ mission to revolutionize manufacturing through innovative automation solutions,” said Ujjwal Kumar, group president of Teradyne Robotics. “We are excited to welcome him to our team and are confident that his leadership will drive significant advancements in our AI capabilities.”


James Davidson

Kumar keynoted the Robotics Summit & Expo, which is produced by The Robot Report. He talked in part about how AI is enabling advanced robotics to be more productive for small and medium-sized businesses. Teradyne Robotics also highlighted advanced robotics during the opening of its new headquarters in Odense, Denmark, where Kumar was joined by Deepu Talla, vice president of robotics and edge computing at NVIDIA, and Rainer Brehm, CEO of Siemens Factory Automation, for a panel discussion on the future of advanced robotics.

“The advent of generative AI, coupled with simulation and digital twins technology, is at a tipping point right now, and that combination is going to change the trajectory of robotics,” Talla said during the discussion.

UR recently integrated NVIDIA’s accelerated computing into its collaborative robot arms (cobots) for path planning 50 to 80 times faster than today’s applications. Teradyne and NVIDIA cited benefits including ease of programming and lower computation time for planning, optimizing, and executing trajectories. For customers, this technology can simplify the setup of common industrial applications, facilitating robot adoption for high-mix, low-volume scenarios.

And MiR uses the NVIDIA Jetson AGX Orin module for AI-powered pallet detection. MiR said this enables it to identify and precisely move objects, navigate autonomously, and operate in complex factory and warehouse environments.




Speaking of AI, OpenAI is best known for ChatGPT and its work on large language models (LLMs). But the San Francisco-based company is returning to its robotics roots after a three-year break. OpenAI shut down its robotics group in July 2021, prior to all of the interest in generative AI.

OpenAI is hiring again for its robotics team, with an open position for a research robotics engineer. It is looking for someone capable of “training multimodal robotics models to unlock new capabilities for our partners’ robots, research and develop improvements to our core models, including exploring new model architectures, collecting robotics data, and evaluations.”

OpenAI is restarting its robotics research group https://www.therobotreport.com/openai-is-restarting-its-robotics-research-group/ https://www.therobotreport.com/openai-is-restarting-its-robotics-research-group/#respond Fri, 31 May 2024 12:59:59 +0000 https://www.therobotreport.com/?p=579253 OpenAI is creating a new internal robotics research group after pulling back from robotics research in 2021.


OpenAI robotics research is resuming, applying generative AI to tasks such as manipulation. | Credit: OpenAI

OpenAI LLC, which is best known for ChatGPT, is restarting its robotics research group. The San Francisco-based company has been a pioneer in generative artificial intelligence and is returning to robotics after a three-year break.

This comes as no surprise, since The Robot Report has reported on several robotics companies working with ChatGPT and large language models (LLMs) over the past year.

The reboot comes after the company shut down its robotics group in July 2021. That shutdown was prior to all of the interest in generative AI after OpenAI released ChatGPT to the world.

When the company shut down its original robotics research group, co-founder Wojciech Zaremba said: “I actually believe quite strongly in the approach that the robotics [team] took in that direction, but from the perspective of AGI [artificial general intelligence], I think that there was actually some components missing. So when we created the robotics [team], we thought that we could go very far with self-generated data and reinforcement learning.”

OpenAI is a 2024 RBR50 award honoree for the innovation of LLMs along with the application programming interfaces (APIs) that have enabled robotics developers to demonstrate interaction between physical robots and the generative AI. In March 2023, OpenAI released the APIs that have facilitated this interaction for the robotics industry.
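
As a hedged illustration of how such APIs get wired to robots, the sketch below uses the openai Python client to turn a natural-language instruction into a structured command that a robot controller could consume. The system prompt, JSON schema, and model choice are assumptions made for the example, not anything OpenAI prescribes.

```python
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM = ("You control a mobile robot. Reply ONLY with JSON like "
          '{"action": "navigate|pick|place", "target": "<object or location>"}.')

def command_robot(instruction: str) -> dict:
    """Translate a natural-language instruction into a structured command
    that a robot controller could consume."""
    resp = client.chat.completions.create(
        model="gpt-4o",  # assumed model choice for the example
        messages=[{"role": "system", "content": SYSTEM},
                  {"role": "user", "content": instruction}],
    )
    # Assumes the model complied with the JSON-only instruction above.
    return json.loads(resp.choices[0].message.content)

print(command_robot("Bring the red cup to the kitchen counter"))
```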




OpenAI is hiring a robotics engineer

As first reported in Fortune this week, OpenAI is hiring again for its robotics team, with an open position for a research robotics engineer. It is looking for someone capable of “training multimodal robotics models to unlock new capabilities for our partners’ robots, research and develop improvements to our core models, including exploring new model architectures, collecting robotics data, and evaluations.”

“We’re looking for people who have a strong research background, in addition to experience shipping AI applications,” said the company.

OpenAI has also participated as an investor in humanoid developer Figure AI’s Series B fundraising earlier this year. The Figure AI investment indicated that robotics is clearly on the radar for OpenAI.

2024 edition of U.S. robotics roadmap points to need for more federal coordination https://www.therobotreport.com/2024-edition-of-u-s-robotics-roadmap-points-to-need-for-more-federal-coordination/ https://www.therobotreport.com/2024-edition-of-u-s-robotics-roadmap-points-to-need-for-more-federal-coordination/#respond Thu, 30 May 2024 16:40:12 +0000 https://www.therobotreport.com/?p=579223 The 2024 edition of "A Roadmap for U.S. Robotics" calls for a more strategic approach and retraining for economic growth.


Cover of the 2024 U.S. robotics roadmap. Source: UC San Diego

Unlike China, Germany, or Japan, the U.S. doesn’t have a centralized industrial policy. The U.S. has a culture that uniquely encourages innovation, but a lack of strong coordination among academia, government, and industry affects the development and deployment of technologies such as robotics, according to the 2024 edition of “A Roadmap for U.S. Robotics: Robotics for a Better Tomorrow.”

The quadrennial set of recommendations is produced and sponsored by institutions led by the University of California, San Diego. The authors sent the latest edition to presidential campaigns, the AI Caucus in Congress, and the investment community, noted Henrik I. Christensen, main editor of the roadmap.

Henrik Christensen, UC San Diego

Christensen is the Qualcomm Chancellor’s Chair of Robot Systems and a distinguished professor of computer science at the Department of Computer Science and Engineering at UC San Diego. He is also the director of the Contextual Robotics Institute, the Cognitive Robotics Laboratory, and the Autonomous Vehicle Laboratory.

In addition, Christensen is a serial entrepreneur, co-founding companies including Robust.AI. He is also an investor through firms such as ROBO Global, Calibrate Ventures, Interwoven, and Spring Mountain Capital.

The Robot Report spoke with Christensen about the latest “Roadmap for U.S. Robotics.”

Robotics roadmap gives a mixed review

How does this year’s roadmap compare with its predecessors?

Christensen: We’ve been doing this since 2009 and have aligned it to the federal elections. We did do a midterm report in 2022, and the current report card is mixed.

For instance, we’ve seen investments in laboratory automation and anticipated the workforce shortage because of demographics and changes in immigration policies. The COVID-19 pandemic also accelerated interest in e-commerce, supply chain automation, and eldercare.

The government support has been mixed. The National Robotics Initiative has sunset, and there have been no meetings of the Congressional Caucus on Robotics since 2019. Recently, we did have a robot showcase with the Congressional Caucus for AI.

With all of the recent attention on artificial intelligence, how does that help or hurt robotics?

Christensen: Some of the staffers of the AI caucus used to go to robotics caucus meetings. The AI initiative created about six years ago folded robotics into its scope, but in the end it provided no new funding for robotics.

Robotics, in many respects, is where AI meets reality. With the workforce shortage, there is a dire need for new robot technology to ensure growth of the U.S. economy.

We’ve heard that reshoring production is part of the answer, but it’s not clear that there must be a corresponding investment in R&D to make it happen. Without a National Robotics Initiative, there’s also no interagency coordination.

CMU co-hosted a Senate Robotics Showcase and Demo Day, where graduate student Richard Desatnik demonstrated a glove that remotely operated a soft robot on a table. Source: Carnegie Mellon University

Christensen calls for more federal coordination

Between corporations, academic departments, and agencies such as DARPA and NASA, isn’t there already investment in robotics research and development?

Christensen: Multiple agencies sponsor robotics, in particular in the defense sector. The foundational research is mainly sponsored by the National Science Foundation, and the programs come across as uncoordinated.

The roadmap isn’t asking for more money for robotics R&D; it’s recommending that available programs be better coordinated and directed toward more widespread industrial and commercial use.

While venture capital has been harder to get in the past few years, how would you describe the U.S. startup climate?

Christensen: We’re seeing a lot of excitement in robotics, with companies like Figure AI. While resources have gone into fundamental research, we need a full applications pipeline and grounded use cases.

Right now, most VCs are conservative, and interest rates have made it harder to get money. Last year, U.S. industrial automation was down 30%, which has been a challenge for robotics.

Why do you think that happened?

Christensen: It was a combination of factors, including COVID. Companies over-invested based on assumptions but then couldn’t invest in infrastructure. Investment in facilities is limited until we get better interest rates.


The latest robotics roadmap said both automation and employment lead to economic growth, as shown by data from the International Federation of Robotics and the Bureau of Labor Statistics. Click here to enlarge. Source: “A Roadmap for U.S. Robotics”

The U.S. can regain robotics leadership

When do you think that might turn around? What needs to happen?

Christensen: In the second half of the year, robotics could pick up quickly. More things, like semiconductors, are moving back to the U.S., and manufacturing and warehousing are short by millions of workers.

Reshoring hasn’t happened at scale, and there’s not enough R&D, but the U.S. also needs to retrain its workforce. There are a few trade schools with a robotics focus, and we need the federal government to assist in emphasizing the need for retraining to allow more reshoring.

What other enabling factors are needed in Washington?

Christensen: The OSTP [the White House Office of Science and Technology Policy] had limited staffing in the previous administration, and we can’t afford another two years of that. We need to hold Washington accountable, and the U.S. industrial sector needs agility.

The robotics community has a big challenge to educate people about the state of the industry. Americans think we’re better than we actually are. We’re not in the top five automotive producers; it’s actually China, Japan, Germany, South Korea, and India. No major industrial robotics supplier is based in the U.S.

When we started these roadmaps, the U.S. was in the top four in industrial robot consumption and a leader in service robotics. Now, it’s no longer in the top 10.

The future for iRobot, the only U.S. household name in robotics, isn’t pretty after its deal fell through with Amazon, at least partly because of antitrust scrutiny. We need to assist our companies to remain competitive.

How might the U.S. get its act together with regard to robotics policy? Australia just launched its own National Robotics Strategy.

Christensen: We shouldn’t let robotics go. I left Denmark about 30 years ago, and the robotics cluster there started after Maersk moved its shipyard to South Korea. The city of Odense and local universities, with national government support, all invested in an ecosystem that led to the formation of Universal Robots and Mobile Industrial Robots. Today, Odense is the capital of robotics in Europe.

Recently, the Port of Odense launched a robotics center for large structures. It continues to grow its ecosystem. It shows why it’s worth it for nations to think strategically about robotics.

We’re in talks to revitalize the Congressional Robotics Caucus and with Robust.AI. We can also show how the advances in AI can help grow robotics.

Manufacturing job openings currently exceed unemployment rates. Source: BLS.gov
