Design / Development Archives - The Robot Report

Waymo ends waitlist, opens robotaxi service to all in San Francisco
June 25, 2024

Waymo is ditching its waitlist and allowing anyone to hail a Waymo robotaxi in San Francisco. | Source: Waymo

Starting today, anyone in San Francisco can hail a robotaxi using Waymo LLC’s app. The company has been operating in the city for years now, slowly scaling its operations. In total, nearly 300,000 people who live in, work in, or visit San Francisco have signed up to ride since the company first opened its waitlist.

Before today, Waymo had welcomed new riders incrementally in the city; now it’s opening its service to everyone. The Mountain View, Calif.-based Alphabet Inc. subsidiary said in a blog post that it is already completing tens of thousands of weekly trips in San Francisco. The company claimed that its Waymo One service provides safe, sustainable, and reliable transportation to locals and visitors to the city.

“I’m thankful to be living in a city that embraces technology when it can improve our lives with convenient and safe modes of transit,” stated Michelle Cusano, executive director of The Richmond Neighborhood Center.

Waymo has been hard at work expanding its robotaxi operations in several cities this year. Earlier this month, the company added 90 square miles (233 sq. km) to its service area in metropolitan Phoenix, which was already its largest.

Waymo said its riders can now hail Waymo One service across 315 square miles (815.8 sq. km) of the Valley. The expanded service area covers more of Scottsdale’s resorts and expands to downtown Mesa, Ariz. This gives riders access to desert attractions, golf courses, and downtown destinations such as the Mesa Arts Center and Pioneer Park.

How are riders using Waymo One in SF?

Waymo recently conducted a rider survey to learn about where its users are going in its robotaxis. The company reported that about 30% of its rides in San Francisco are to local businesses.

In addition, over half of the riders said they’ve used Waymo in the past couple of months to travel to or from medical appointments. The company asserted that this highlights the value of personal space during these trips. 

Interestingly, 36% of riders in San Francisco said they used Waymo to connect to other forms of transit, like BART or Muni. 

“I enjoy riding in Waymo cars and appreciate the ease of transportation,” said Charles Renfroe, development manager at Openhouse SF. “Members of our community, especially transgender and gender non-conforming folks, don’t have to worry about being verbally assaulted or discriminated against when riding with Waymo.”

Waymo’s fleet is all-electric and sources 100% renewable energy from the City’s CleanPowerSF program. Since the beginning of its commercial operations in August 2023, the company said its rides have helped curb carbon emissions by an estimated 570,000 kg (about 628 tons).

California Sen. Dave Cortese last week withdrew Senate Bill 915. It would have allowed local governments to restrict and tax autonomous vehicle companies, similar to how conventional taxicab companies are regulated in California.

Robotaxi hits rough roads in Phoenix

Earlier this month, Waymo issued a voluntary software recall for all of its 672 robotaxis after one autonomously drove into a telephone pole in Phoenix last month. This was Waymo’s second-ever recall.

During the incident, which took place on May 21, an empty Waymo vehicle was driving to pick up a passenger. To get there, it drove through an alley lined on both sides by wooden telephone poles that were level with the road, not up on a curb. The road had longitudinal yellow striping on both sides to indicate the path for vehicles.

As the vehicle pulled over, it struck one of the poles at a speed of 8 mph (12.8 kph), sustaining some damage. No passengers or bystanders were hurt, said Waymo.

After completing the software update, the company filed the recall with the National Highway Traffic Safety Administration (NHTSA). Waymo said this update corrects an error in the software that “assigns a low damage score” to the telephone pole. In addition, it updates the company’s map so its vehicles can better account for the hard road edge in the alleyway that was previously not included.

Waymo’s engineers deployed the update at the central depot to which its robotaxis regularly return for maintenance and testing; it was not an over-the-air software update.

RTI Connext to deliver real-time data connectivity to NVIDIA Holoscan
June 25, 2024

Medical device developers can now use RTI Connext and NVIDIA Holoscan. Source: Real-Time Innovations

Devices such as surgical robots need access to distributed, reliable, and continuous data streaming across different sensors and devices. Real-Time Innovations, or RTI, today said it is collaborating with NVIDIA Corp. to deliver real-time data connectivity for the NVIDIA Holoscan software development kit with RTI Connext.

“Connectivity is the foundation for cutting-edge technologies, such as AI, that are transforming the medtech industry and beyond,” stated Darren Porras, market development manager for medical at Real-Time Innovations. “We’re proud to work with NVIDIA to harness the transformative power of AI to revolutionize healthcare.”

“By providing competitive, tailored solutions, we are paving the way for sustainable business value across the healthcare, automotive, and industrial sectors, marking an important step toward a future where technology enhances the quality of life and drives innovation,” he added.

Founded in 1991, Real-Time Innovations claimed that it has 2,000 customer designs and that its software runs more than 250 autonomous vehicle programs, controls North America’s largest power plants, and integrates over 400 defense programs. The Sunnyvale, Calif.-based company said its systems also support next-generation medical technologies and surgical robots, Canada’s air traffic control, and NASA’s launch-control systems.

RTI Connext designed to reliably distribute data

The RTI Connext software framework enables users to build intelligent distributed systems that combine advanced sensing, fast control, and artificial intelligence algorithms, said Real-Time Innovations. This can help developers bring capable systems to market faster, it said.

“Connext facilitates interoperable and real-time communication for complex, intelligent systems in the healthcare industry and beyond,” according to RTI. It is based on the Data Distribution Service (DDS) standard and has been proven across industries to reliably communicate data, the company said.

Product teams can now efficiently build and deploy AI-enabled applications and distributed systems that require low-latency and reliable data sharing for sensor and video processing. Connext, which is available for free trials, allows applications to work together as one, said RTI.
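To make the publish/subscribe pattern behind DDS concrete, here is a minimal, self-contained Python sketch of topic-based data exchange. It deliberately does not use the RTI Connext library: the Bus class, the ImageFrame type, and the "EndoscopeVideo" topic name are illustrative stand-ins, whereas a real Connext application would define its data types in IDL and exchange them through standard DDS entities such as domain participants, data writers, and data readers.

```python
# Conceptual sketch of DDS-style, data-centric publish/subscribe.
# NOT the RTI Connext API; all names here are hypothetical.
from collections import defaultdict
from dataclasses import dataclass
from typing import Callable, DefaultDict, List


@dataclass
class ImageFrame:
    """A sample type, analogous to a DDS data type defined in IDL."""
    camera_id: str
    timestamp_ns: int
    width: int
    height: int


class Bus:
    """Stands in for the DDS 'global data space': writers publish samples
    on named topics, and readers subscribe by topic name and type."""

    def __init__(self) -> None:
        self._subscribers: DefaultDict[str, List[Callable]] = defaultdict(list)

    def subscribe(self, topic: str, callback: Callable) -> None:
        self._subscribers[topic].append(callback)

    def publish(self, topic: str, sample) -> None:
        for callback in self._subscribers[topic]:
            callback(sample)


bus = Bus()
bus.subscribe("EndoscopeVideo",
              lambda s: print(f"AI node received frame {s.timestamp_ns} from {s.camera_id}"))
bus.publish("EndoscopeVideo", ImageFrame("endoscope_0", 1_718_000_000, 1920, 1080))
```

The point of the data-centric model is that publishers and subscribers share only a topic name and data type rather than direct connections, which is what lets independently developed nodes in a distributed system interoperate.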

NVIDIA Holoscan gets advanced data flows

RTI Connext provides a connectivity framework for the NVIDIA Holoscan software development kit (SDK), offering integration across various systems and sensors to complement its AI capabilities. 

“Enterprises are looking for advanced software-defined architectures that deliver on low latency, flexibility, reliability, scalability, and cybersecurity,” said David Niewolny, director of business development for healthcare and medical at NVIDIA. “With RTI Connext and NVIDIA Holoscan, medical technology developers can accelerate their software-defined product visions by leveraging infrastructure purpose-built for healthcare applications.”

Connext now integrates with NVIDIA’s AI sensor-processing pipelines and reference workflows, bolstering data flows and real-time AI processing across a system of systems. With capabilities for real-time visualization and data-driven insights, the technologies can help drive more precise and automated minimally invasive procedures, clinical monitoring, and next-generation medical imaging platforms. They can also help developers create smarter, integrated systems across industries, said the partners.

NVIDIA said Holoscan offers the software and hardware needed to build AI applications and deploy sensor-processing capabilities from edge to cloud. This can help companies explore new capabilities, accelerate time to market, and lower costs, said the Santa Clara, Calif.-based company.

NVIDIA Holoscan now supports interoperability with a wide range of legacy systems, such as Windows-based medical devices, real-time operating system nodes in surgical robots, and patient-monitoring systems, through RTI Connext.

Wayve launches PRISM-1 4D reconstruction model for autonomous driving
June 18, 2024

A scene reconstructed by Wayve’s PRISM-1 technology. | Source: Wayve

Wayve, a developer of embodied artificial intelligence, launched PRISM-1, a 4D reconstruction model that it said can enhance the testing and training of its autonomous driving technology. 

The London-based company first showed the technology in December 2023 through its Ghost Gym neural simulator. Wayve used novel view synthesis to create precise 4D scene reconstructions (three dimensions in space plus time) using only camera inputs.

It achieved this using unique methods that it claimed can accurately and efficiently simulate the dynamics of complex and unstructured environments for advanced driver-assist systems (ADAS) and self-driving vehicles. PRISM-1 is the model that powers the next generation of Ghost Gym simulations.

“PRISM-1 bridges the gap between the real world and our simulator,” stated Jamie Shotton, chief scientist at Wayve. “By enhancing our simulation platform with accurate dynamic representations, Wayve can extensively test, validate, and fine-tune our AI models at scale.”

“We are building embodied AI technology that generalizes and scales,” he added. “To achieve this, we continue to advance our end-to-end AI capabilities, not only in our driving models, but also through enabling technologies like PRISM-1. We are also excited to publicly release our WayveScenes101 dataset, developed in conjunction with PRISM-1, to foster more innovation and research in novel view synthesis for driving.”

PRISM-1 excels at realism in simulation, Wayve says

Wayve said PRISM-1 enables scalable, realistic re-simulations of complex driving scenes with minimal engineering or labeling input. 

Unlike traditional methods, which rely on lidar and 3D bounding boxes, PRISM-1 uses novel synthesis techniques to accurately depict moving elements like pedestrians, cyclists, vehicles, and traffic lights. The system includes precise details, like clothing patterns, brake lights, and windshield wipers. 

Achieving realism is critical for building an effective training simulator and evaluating driving technologies, according to Wayve. Traditional simulation technologies treat vehicles as rigid entities and fail to capture safety-critical dynamic behaviors like indicator lights or sudden braking. 

PRISM-1, on the other hand, uses a flexible framework that can identify and track changes in the appearance of scene elements over time, said the company. This enables it to precisely re-simulate complex dynamic scenarios with elements that change in shape and move throughout the scene. 

It can distinguish between static and dynamic elements in a self-supervised manner, avoiding the need for explicit labels, scene graphs, and bounding boxes to define the configuration of a busy street.

Wayve said this approach maintains efficiency, even as scene complexity increases, ensuring that more complex scenarios do not require additional engineering effort. This makes PRISM-1 a scalable and efficient system for simulating complex urban environments, it asserted.

WayveScenes101 benchmark released

Wayve also released its WayveScenes101 benchmark. This dataset comprises 101 diverse driving scenarios from the U.K. and the U.S. It includes urban, suburban, and highway scenes under various weather and lighting conditions.

The company says it aims for this dataset to support the AI research community in advancing novel view synthesis models and the development of more robust and accurate scene representation models for driving. 

Last month, Wayve closed a $1.05 billion Series C funding round. SoftBank Group led the round, which also included new investor NVIDIA and existing investor Microsoft.

Since its founding, Wayve has developed and tested its autonomous driving system on public roads. It has also developed foundation models for autonomy, similar to “GPT for driving,” that it says can empower any vehicle to perceive its surroundings and safely drive through diverse environments. 

Waabi raises $200M from Uber, NVIDIA, and others on the road to self-driving trucks
June 18, 2024

The Waabi Driver includes a generative AI stack as well as sensors and compute hardware. Source: Waabi

Autonomous passenger vehicles have hit potholes over the past few years, with accidents leading to regulatory scrutiny, but investment in self-driving trucks has continued. Waabi today announced that it has raised $200 million in an oversubscribed Series B round. The funding brings total investment in the Toronto-based startup to more than $280 million.

Waabi said that it “is on the verge of Level 4 autonomy” and that it expects to deploy fully autonomous trucks in Texas next year. The company claimed that it has been able to advance quickly toward that goal because of its use of generative artificial intelligence in the physical world.

“I have spent most of my professional life dedicated to inventing new AI technologies that can deliver on the enormous potential of AI in the physical world in a provably safe and scalable way,” stated Raquel Urtasun, a professor at the University of Toronto and founder and CEO of Waabi.

“Over the past three years, alongside the incredible team at Waabi, I have had the chance to turn these breakthroughs into a revolutionary product that has far surpassed my expectations,” she added. “We have everything we need — breakthrough technology, an incredible team, and pioneering partners and investors — to launch fully driverless autonomous trucks in 2025. This is monumental for the industry and truly marks the beginning of the next frontier for AI.”

Waabi uses generative AI to reduce on-road testing

Waabi said it is pioneering generative AI for the physical world, starting with applying the technology to self-driving trucks. The company said it has developed “a single end-to-end AI system that is capable of human-like reasoning, enabling it to generalize to any situation that might happen on the road, including those it has never seen before.”

Because of that ability to generalize, the system requires significantly less training data and compute resources in comparison with approaches to autonomy, asserted Waabi. In addition, the company claimed that its system is fully interpretable and that its safety can be validated and verified.

The company said Copilot4D, its “end-to-end AI system, paired with Waabi World, the world’s most advanced simulator, reduces the need for extensive on-road testing and enables a safer, more efficient solution that is highly performant and scalable from Day 1.”

Several industry observers have pointed out that self-driving trucks will likely arrive on public roads before widespread deployments of robotaxis in the U.S. While Waymo has pumped the brakes on development, other companies have made progress, including Inceptio, FERNRIDE, Kodiak Robotics, and Aurora.

At the same time, work on self-driving cars continues, with Wayve raising $1.05 billion last month and TIER IV obtaining $54 million. General Motors invested another $850 million in Cruise yesterday.

“Self-driving technology is a prime example of how AI can dramatically improve our lives,” said AI luminary Geoff Hinton. “Raquel and Waabi are at the forefront of innovation, developing a revolutionary approach that radically changes the way autonomous systems work and leads to safer and more efficient solutions.”

Waabi plans to expand its commercial operations and grow its team in Canada and the U.S. The company cited recent accomplishments, including the opening of its new Texas AV trucking terminal, a collaboration with NVIDIA to integrate NVIDIA DRIVE Thor into the Waabi Driver, and its ongoing partnership with Uber Freight. It has run autonomous shipments for Fortune 500 companies and top-tier shippers in Texas.

Copilot4D predicts future lidar point clouds from a history of past observations, similar to how large language models (LLMs) predict the next word given the preceding text. Source: Waabi
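Schematically, the "next word" analogy in the caption above can be written as an autoregressive factorization over tokenized lidar sweeps. The notation below is an illustrative sketch of that idea, not Waabi's published formulation: the future sweep x_{t+1} is discretized into tokens z_1, ..., z_N, which are predicted one at a time, conditioned on the preceding tokens and on the history of past sweeps.

```latex
% Illustrative next-sweep objective, in the style of language modeling:
p\bigl(x_{t+1} \mid x_{t-k:t}\bigr)
  \;=\; \prod_{i=1}^{N} p\!\left(z^{t+1}_{i} \,\middle|\, z^{t+1}_{<i},\, x_{t-k:t}\right),
\qquad
\mathcal{L} \;=\; -\log p\bigl(x_{t+1} \mid x_{t-k:t}\bigr)
```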

Technology leaders invest in self-driving trucks

Waabi noted that top AI, automotive, and logistics enterprises were among its investors. Uber and Khosla Ventures led Waabi’s Series B round. Other participants included NVIDIA, Volvo Group Venture Capital, Porsche Automobil Holding, Scania Invest, and Ingka Investments.

“Waabi is developing autonomous trucking by applying cutting-edge generative AI to the physical world,” said Jensen Huang, founder and CEO of NVIDIA. “I’m excited to support Raquel’s vision through our investment in Waabi, which is powered by NVIDIA technology. I have championed Raquel’s pioneering work in AI for more than a decade. Her tenacity to solve the impossible is an inspiration.”

Additional support came from HarbourVest Partners, G2 Venture Partners, BDC Capital’s Thrive Venture Fund, Export Development Canada, Radical Ventures, Incharge Capital, and others.

“We are big believers in the potential for autonomous technology to revolutionize transportation, making a safer and more sustainable future possible,” added Dara Khosrowshahi, CEO of Uber. “Raquel is a visionary in the field, and under her leadership, Waabi’s AI-first approach provides a solution that is extremely exciting in both its scalability and capital efficiency.”

Vinod Khosla, founder of Khosla Ventures, said: “Change never comes from incumbents but from the innovation of entrepreneurs that challenge the status quo. Raquel and her team at Waabi have done exactly that with their products and business execution. We backed Waabi very early on with the bet that generative AI would transform transportation and are thrilled to continue on this journey with them as they move towards commercialization.”

At CVPR, NVIDIA offers Omniverse microservices, shows advances in visual generative AI
June 17, 2024

As shown at CVPR, Omniverse Cloud Sensor RTX microservices generate high-fidelity sensor simulation from an autonomous vehicle (left) and an autonomous mobile robot (right). Sources: NVIDIA, Fraunhofer IML (right)

NVIDIA Corp. today announced NVIDIA Omniverse Cloud Sensor RTX, a set of microservices that enable physically accurate sensor simulation to accelerate the development of all kinds of autonomous machines.

NVIDIA researchers are also presenting 50 research projects around visual generative AI at the Computer Vision and Pattern Recognition, or CVPR, conference this week in Seattle. They include new techniques to create and interpret images, videos, and 3D environments. In addition, the company said it has created its largest indoor synthetic dataset with Omniverse for CVPR’s AI City Challenge.

Sensors provide industrial manipulators, mobile robots, autonomous vehicles, humanoids, and smart spaces with the data they need to comprehend the physical world and make informed decisions.

NVIDIA said developers can use Omniverse Cloud Sensor RTX to test sensor perception and associated AI software in physically accurate, realistic virtual environments before real-world deployment. This can enhance safety while saving time and costs, it said.

“Developing safe and reliable autonomous machines powered by generative physical AI requires training and testing in physically based virtual worlds,” stated Rev Lebaredian, vice president of Omniverse and simulation technology at NVIDIA. “Omniverse Cloud Sensor RTX microservices will enable developers to easily build large-scale digital twins of factories, cities and even Earth — helping accelerate the next wave of AI.”

Omniverse Cloud Sensor RTX supports simulation at scale

Built on the OpenUSD framework and powered by NVIDIA RTX ray-tracing and neural-rendering technologies, Omniverse Cloud Sensor RTX combines real-world data from videos, cameras, radar, and lidar with synthetic data.

Omniverse Cloud Sensor RTX includes software application programming interfaces (APIs) to accelerate the development of autonomous machines for any industry, NVIDIA said.

Even for scenarios with limited real-world data, the microservices can simulate a broad range of activities, claimed the company. It cited examples such as whether a robotic arm is operating correctly, an airport luggage carousel is functional, a tree branch is blocking a roadway, a factory conveyor belt is in motion, or a robot or person is nearby.

Microservice to be available for AV development 

CARLA, Foretellix, and MathWorks are among the first software developers with access to Omniverse Cloud Sensor RTX for autonomous vehicles (AVs). The microservices will also enable sensor makers to validate and integrate digital twins of their systems in virtual environments, reducing the time needed for physical prototyping, said NVIDIA.

Omniverse Cloud Sensor RTX will be generally available later this year. NVIDIA noted that its announcement coincided with its first-place win at the Autonomous Grand Challenge for End-to-End Driving at Scale at CVPR.

The NVIDIA researchers’ winning workflow can be replicated in high-fidelity simulated environments with Omniverse Cloud Sensor RTX. Developers can use it to test self-driving scenarios in physically accurate environments before deploying AVs in the real world, said the company.

Two of NVIDIA’s papers — one on the training dynamics of diffusion models and another on high-definition maps for autonomous vehicles — are finalists for the Best Paper Awards at CVPR.

The company also said its win for the End-to-End Driving at Scale track demonstrates its use of generative AI for comprehensive self-driving models. The winning submission outperformed more than 450 entries worldwide and received CVPR’s Innovation Award.

Collectively, the work introduces artificial intelligence models that could accelerate the training of robots for manufacturing, enable artists to more quickly realize their visions, and help healthcare workers process radiology reports.

“Artificial intelligence — and generative AI in particular — represents a pivotal technological advancement,” said Jan Kautz, vice president of learning and perception research at NVIDIA. “At CVPR, NVIDIA Research is sharing how we’re pushing the boundaries of what’s possible — from powerful image-generation models that could supercharge professional creators to autonomous driving software that could help enable next-generation self-driving cars.”

Foundation model eases object pose estimation

NVIDIA researchers at CVPR are also presenting FoundationPose, a foundation model for object pose estimation and tracking that can be instantly applied to new objects during inference, without the need for fine tuning. The model uses either a small set of reference images or a 3D representation of an object to understand its shape. It set a new record on a benchmark for object pose estimation.

FoundationPose can then identify and track how that object moves and rotates in 3D across a video, even in poor lighting conditions or complex scenes with visual obstructions, explained NVIDIA.

Industrial robots could use FoundationPose to identify and track the objects they interact with. Augmented reality (AR) applications could also use it with AI to overlay visuals on a live scene.

NeRFDeformer transforms data from a single image

NVIDIA’s research includes a text-to-image model that can be customized to depict a specific object or character, a new model for object-pose estimation, a technique to edit neural radiance fields (NeRFs), and a visual language model that can understand memes. Additional papers introduce domain-specific innovations for industries including automotive, healthcare, and robotics.

A NeRF is an AI model that can render a 3D scene based on a series of 2D images taken from different positions in the environment. In robotics, NeRFs can generate immersive 3D renders of complex real-world scenes, such as a cluttered room or a construction site.
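For reference, the standard NeRF formulation (from the original NeRF literature, not anything specific to the paper described here) renders the color of a pixel by integrating a learned color c and density σ along the camera ray r(t) = o + t·d between near and far bounds:

```latex
% Volume rendering of a ray r(t) = o + t d between bounds t_n and t_f:
\hat{C}(\mathbf{r}) = \int_{t_n}^{t_f} T(t)\,\sigma\bigl(\mathbf{r}(t)\bigr)\,
                      \mathbf{c}\bigl(\mathbf{r}(t), \mathbf{d}\bigr)\,dt,
\qquad
T(t) = \exp\!\left(-\int_{t_n}^{t} \sigma\bigl(\mathbf{r}(s)\bigr)\,ds\right)
```

Fitting the network amounts to minimizing the difference between these rendered colors and the pixel colors observed in the input 2D images, which is why a set of posed photos is enough to recover the scene.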

However, to make any changes, developers would need to manually define how the scene has transformed — or remake the NeRF entirely.

Researchers from the University of Illinois Urbana-Champaign and NVIDIA have simplified the process with NeRFDeformer. The method can transform an existing NeRF using a single RGB-D image, which is a combination of a normal photo and a depth map that captures how far each object in a scene is from the camera.

Researchers have simplified the process of generating a 3D scene from 2D images using NeRFs. Source: NVIDIA

JeDi model shows how to simplify image creation at CVPR

Creators typically use diffusion models to generate specific images based on text prompts. Prior research focused on the user training a model on a custom dataset, but the fine-tuning process can be time-consuming and inaccessible to general users, said NVIDIA.

JeDi, a paper by researchers from Johns Hopkins University, Toyota Technological Institute at Chicago, and NVIDIA, proposes a new technique that allows users to personalize the output of a diffusion model within a couple of seconds using reference images. The team found that the model outperforms existing methods.

NVIDIA added that JeDi can be combined with retrieval-augmented generation, or RAG, to generate visuals specific to a database, such as a brand’s product catalog.

JeDi is a new technique that allows users to easily personalize the output of a diffusion model within a couple of seconds using reference images, like an astronaut cat that can be placed in different environments. Source: NVIDIA

Visual language model helps AI get the picture

NVIDIA said it has collaborated with the Massachusetts Institute of Technology (MIT) to advance the state of the art for vision language models, which are generative AI models that can process videos, images, and text. The partners developed VILA, a family of open-source visual language models that they said outperforms prior neural networks on benchmarks that test how well AI models answer questions about images.

VILA’s pretraining process provided enhanced world knowledge, stronger in-context learning, and the ability to reason across multiple images, claimed the MIT and NVIDIA team.

The VILA model family can be optimized for inference using the NVIDIA TensorRT-LLM open-source library and can be deployed on NVIDIA GPUs in data centers, workstations, and edge devices.

VILA can understand memes and reason based on multiple images or video frames. Source: NVIDIA

Generative AI drives AV, smart city research at CVPR

NVIDIA Research has hundreds of scientists and engineers worldwide, with teams focused on topics including AI, computer graphics, computer vision, self-driving cars, and robotics. A dozen of the NVIDIA-authored CVPR papers focus on autonomous vehicle research.

“Producing and Leveraging Online Map Uncertainty in Trajectory Prediction,” a paper authored by researchers from the University of Toronto and NVIDIA, has been selected as one of 24 finalists for CVPR’s best paper award.

In addition, Sanja Fidler, vice president of AI research at NVIDIA, will present on vision language models at the Workshop on Autonomous Driving today.

NVIDIA has contributed to the CVPR AI City Challenge for the eighth consecutive year to help advance research and development for smart cities and industrial automation. The challenge’s datasets were generated using NVIDIA Omniverse, a platform of APIs, software development kits (SDKs), and services for building applications and workflows based on Universal Scene Description (OpenUSD).

AI City Challenge synthetic datasets span multiple environments generated by NVIDIA Omniverse, allowing hundreds of teams to test AI models in physical settings such as retail and warehouse environments to enhance operational efficiency. Source: NVIDIA

About the author

Isha Salian writes about deep learning, science and healthcare, among other topics, as part of NVIDIA’s corporate communications team. She first joined the company as an intern in summer 2015. Isha has a journalism M.A., as well as undergraduate degrees in communication and English, from Stanford.

Beep deploys autonomous shuttles at Honolulu airport with partners
June 16, 2024

Wiki Wiki shuttles await passenger riders at the Daniel K. Inouye International Airport (HNL) in Honolulu. Source: Beep

As millions of people in the Northern Hemisphere begin summer vacations, some of them will board autonomous vehicles for part of their journeys. Last month, Beep Inc. announced that it is working with the Hawai’i Department of Transportation, or HDOT, and Sustainability Partners to launch an 18-month self-driving shuttle pilot at Daniel K. Inouye International Airport (HNL).

“Through our partnership with Sustainability Partners, we’re honored that HDOT and HNL have placed their trust in our experience, leadership and differentiated approach of safe and integrated autonomous mobility with the launch of the Miki shuttle pilot service,” stated Joe Moye, CEO of Beep, at the time. “Our fleet of turnkey shared and electric autonomous shuttles prioritizes safety and sustainability while enhancing the airport travel experience for passengers.”

Founded in 2018, Beep said it delivers software and services for next-generation autonomous, shared-mobility systems. The Orlando, Fla.-based company plans, deploys, and manages autonomous shuttles for private and public communities. It also claimed that it continually improves safety and operating capabilities with data from its deployments.

Eduardo Rosa, senior vice president of operations at Beep, answered the following questions about the Honolulu deployment, which the company claimed is the first of its kind:

Autonomous shuttles face crowded airport environs

What are some of the current transport challenges at airports, and specifically at the Daniel K. Inouye International Airport?

Rosa: Between a skyrocketing number of air travelers, traditional transportation networks and staffing issues, many airports around the country are facing significant challenges addressing their transportation needs.

While the Daniel K. Inouye International Airport shares many common transport challenges with other major airports, its unique location and the high volume of tourist traffic present specific issues that require ongoing attention and improvement.

The Hawaii Department of Transportation, which operates HNL and 14 other airports statewide, is addressing these challenges through an airports modernization program that includes infrastructure upgrades, improved traffic management systems, and enhancements to existing transport links. This includes initiatives such as the pilot program with Beep to see if autonomous shuttles can be incorporated into the airport’s passenger-shuttle operations.

How does the Miki shuttle deal with obstacles such as luggage carts, manual shuttles, and pedestrians?

Rosa: Like all of Beep’s autonomous shared-mobility solutions, the Miki — the Hawaiian word for “agile” — shuttles are equipped with autonomous software and hardware that allow the vehicles to safely navigate any obstacles and operate alongside human-driven vehicles. These technologies, along with an onboard attendant, allow the shuttles to be operated safely in airport and other congested environments like campuses, communities, and public transit.

The Miki shuttles operate as secondary support vehicles to the airport’s existing Wiki Wiki — the Hawaiian word for “fast” — shuttle buses. The vehicles operate along dedicated routes in the airport’s restricted area, where pedestrians, luggage carts and other obstacles do not impede mobility.

Beep leverages its leadership in having run the nation’s largest and most tenured autonomous shuttle service deployments to date—from Hawaii, California, and North Carolina to Florida and more from coast to coast. [It] is entrusted by local governments and transit authorities nationwide.

Our deployments have shuttled tens of thousands of people safely and are allowing us to actively lead the charge in evolving the technology and operations to navigate through complicated, real-world environments. It’s a constant evolution.

How much integration is necessary between the Miki and Wiki Wiki shuttles?

Rosa: With the ability to carry 11 passengers including an attendant, the Miki shuttles are operating on the same routes as the Wiki Wiki shuttles and are easily capable of working alongside and augmenting human-driven routes. In many of Beep’s deployments, shuttles operate safely alongside other drivers and municipal vehicles.

The Miki and Wiki Wiki shuttles transport passengers between Terminals 1 and 2 from 7:00 a.m. to 10:00 p.m. daily.

Beep engaged the Wiki Wiki service early on in the deployment process to ensure seamless integration and coordinated operations of augmented transportation. Beep maintains continuous radio communications with the buses at all times through our Command Center, allowing for real-time updates and immediate response to any issues, thereby enhancing safety and efficiency.

Most of Beep’s shuttle deployments are designed to integrate into and enhance existing transport systems – reflected strongly by this project being led by the Hawai’i Department of Transportation and facilitated by Sustainability Partners.

Beep is working with the Jacksonville Transportation Authority in Florida on the Autonomous Innovation Center. Source: Beep

Beep dedicates staffers to Miki shuttle pilot

Are the shuttles fully autonomous — is there a teleoperation or manual option?

Rosa: The Miki shuttles operate autonomously along a pre-programmed route with an onboard attendant who, at any time, can take control of the shuttle if needed. These staff members are there to educate passengers about autonomous vehicles, as well as serve as an extra set of eyes for safety – in line with our work with NHTSA [National Highway Traffic Safety Administration].

Additionally, technicians in the Beep Command Center at our headquarters in Orlando’s Lake Nona are able to monitor shuttles remotely and can alert the attendant if there is a need for them to take manual control of the vehicle.

Are there dedicated staffers from Beep or HNL onsite during this pilot?

Rosa: Yes, in addition to the attendants who are on board the shuttles while they are in operation. All Beep service deployments are integrated into the communities where we operate with a mobility-as-a-service approach that’s clearly differentiated from other forms of AV implementation. This means everything from education and first-responder training to other forms of onsite support.

Beep has launched the AutonomOS service management platform for rapid deployment and fleet visibility. Source: Beep

Sustainability a goal for autonomous shuttle developers

Is Sustainability Partners providing temporary charging infrastructure? How would that work at scale? Are the Wiki Wiki shuttles electric?

Rosa: The Wiki Wiki buses, HNL’s traditional transportation system, run on gasoline, but the Miki shuttles are all electric. The Hawai’i Department of Transportation, as part of its ongoing sustainability goals, is working to transition its fleet to electric vehicles, and the Miki shuttles are helping them work toward those goals.

Sustainability Partners is helping to advance the state’s electrification mission by facilitating the development of its electric vehicle infrastructure. The Beep shuttles only require a 220v plug to use their chargers, so temporary charging infrastructure is not required. The HNL Airport has ordered 18 electric transit buses as part of its efforts to transition its vehicle fleet to electric vehicles.

What are HDOT and the airport looking for in this pilot? What are their metrics or goals?

Rosa: HDOT and its partners are using the pilot project to evaluate new ways to increase the overall efficiency and augmentation of intra-airport transportation services. This pilot project is also helping HDOT continue testing the viability of electrified mobility as a clean, affordable option to connect passengers and staff to terminals and services.

Enhancing roadway safety is always a common goal between Beep and our partners, and is at the center of everything we do as a company—it is the focal point of planning, deployment and management of our mobility services and a critical component in our partnerships and education.

Beep builds on nationwide experience

How is this project different from Beep’s other deployments across the U.S., and have any of those led to full deployments?

Rosa: This project is a very exciting first use of Beep shuttles in an airport environment, the natural and ideal setting for shared, autonomous mobility systems.

It’s also very similar to many of our other projects spanning across the U.S. Beep has been testing and deploying autonomous shuttles in diverse environments for more than five years. This has brought us unmatched experience in the industry and provides us with the data, insights, and learnings needed to continue to safely advance the use of our shuttle systems in autonomous mobility networks everywhere.

Our leadership in testing and operating autonomous shuttle networks is demonstrated by the operation of the largest and longest tenured autonomous shuttle deployment in the U.S., with five routes serving Lake Nona, Fla.’s medical campus, residential community, business park and entertainment district in the master-planned, 17-sq.-mi. community.

We have also been awarded the nation’s largest public-sector contract for the deployment of autonomous shuttles by the Jacksonville Transportation Authority in Jacksonville, Fla. Beep also operated the first and only federally procured autonomous shuttle deployment serving public passengers at Yellowstone National Park, alongside additional deployments in Arizona, Florida, North Carolina, and Georgia.

We currently have several full deployments in planned developments, college campuses, retail hubs, municipalities and more. These first-mile, last-mile mobility solutions are providing valuable transportation options for passengers, while helping to reduce traffic and congestion where they are operating.

IEEE launches study group to explore and develop humanoid robot standards
June 14, 2024

NVIDIA CEO Jensen Huang at GTC 2024 with images of many of the humanoids in development. The IEEE study group is evaluating humanoid robots and the need for standards. Credit: Eugene Demaitre

As humanoid robots garner widespread public attention, such systems will also need to stand up to safety and performance standards. IEEE’s Robotics & Automation Society today announced the formation of a new study group that will look into the current humanoid landscape and then develop a roadmap for future standards that various organizations can follow.

Aaron Prather, director of robotics and autonomous systems programs at ASTM International, will chair the humanoid study group. The group is open to others across industry, academia, government agencies, and fellow standards development organizations (SDOs), said the Institute of Electrical and Electronics Engineers (IEEE).

IEEE study group has a year to deliver analysis

The IEEE Robotics & Automation Society (RAS) has given the study group up to a year to produce the final deliverables. They include:

  1. A current landscape analysis of standards that can or cannot be applied to humanoid robots. An example would be how much of the current ANSI/RIA R15.08 Safety for Industrial Mobile Robots standard applies to humanoids.
  2. An identification of gaps in the existing standards framework. This includes gaps in topics ranging from safety to performance, as well as gaps between use cases such as industrial, home, and service applications.
  3. An identification of potential roadblocks to addressing those gaps, whether from a lack of information, insufficient research, or technology that is not yet mature enough to justify developing a standard at this time.
  4. A roadmap for future standards development that both addresses the gaps and mitigates potential roadblocks. The roadmap could also identify which SDOs are best suited to do the necessary work based on the ultimate goal for each standard.

Why develop humanoid standards now?

Interest in humanoid robots has exploded recently. From academia to industry, many people see humanoids as the ideal form factor for addressing issues in a world designed around humans. Billions of dollars of both private and public money are being invested in humanoids. However, the lack of standards could slow this development if not addressed quickly, noted IEEE.

“In the past, standards development organizations would wait to develop a standard until after a robot had hit the market,” stated Prather, who will be speaking at RoboBusiness 2024. “However, humanoid robots are being developed so quickly for both the academic lab and the factory and warehouse floors, we really don’t have time to wait until the proverbial robot feet hit the floor.”

“By bringing key stakeholders across the spectrum together now, not only can we identify the current landscape and where the gaps and potential problems are, but we can [also] quickly get a roadmap out on what us SDOs need to work on and cut down on the time standards for humanoids are developed,” he added. “You can find the fastest path to your goal with a map to follow.”

Prather is a notable skeptic of the near-term value of humanoid robotics, but he told The Robot Report that one of the reasons he was asked to lead the IEEE study group was to ensure rigor and impartiality. Prather said he plans to have this study group’s first virtual meeting in July.

There is no limit on the number of participants or on how many people can help produce the final deliverables. However, the organization will give preference to those with crucial knowledge of humanoids and the standards-development process.

Those interested in learning more about the study group and how to get involved can visit this IEEE website: https://www.ieee-ras.org/industry-government/standards/active-projects/study-group-humanoid-robots

 

Collaborative Robotics expands with new Seattle office and AI team
June 14, 2024

Collaborative Robotics has kept its actual robot out of public view. | Source: Adobe Stock, Photoshopped by The Robot Report

Collaborative Robotics, a developer of cobots for logistics, today announced the establishment of a Foundation Models AI team. Michael Vogelsong, a founder of Amazon’s Deep Learning Tech team, will lead the new team in Seattle.

“Our cobots are already doing meaningful work in production on behalf of our customers,” stated Brad Porter, CEO of Collaborative Robotics. “Our investment in building a dedicated foundation models AI team for robotics represents a significant step forward as we continue to increase the collaborative potential of our cobots.”

“The foundation models AI team will explore the cutting-edge possibilities of AI in enhancing robotic capabilities, particularly in the area of bimanual manipulation and low-latency multimodal models,” he added. “We aim to achieve a new level of comprehension and control in our robots, enabling them to understand and respond effectively to complex tasks and environments. I am looking forward to seeing the innovations this talented team creates.”

Collaborative Robotics keeps its system under wraps

In April, Collaborative Robotics closed its $100 million Series B round toward commercializing its autonomous mobile manipulator. The company has been very secretive about the actual design of its system, releasing only scant details about the payload capabilities and the fact that it is a wheeled collaborative robot.

At the time, Porter told The Robot Report that the new cobot’s base is capable of omnidirectional motion with four wheels and a swerve-drive design, along with a central tower-like structure that can acquire, carry, and place totes and boxes around a warehouse.

Brad Porter of Collaborative Robotics (far right) participated in a debate on whether humanoid robots are reality or hype at Robotics Invest this week in Boston. Credit: Eugene Demaitre

Foundation AI models coming to robotics

Foundation AI models are currently one of the hottest topics in robotics, with many companies investing in both talent and intellectual property to develop the technology. Foundation models offer the promise of generalizing behaviors and reducing the effort to build and maintain special-purpose models.

Collaborative Robotics said its new Foundation Models AI team will concentrate on integrating advanced machine-learning techniques into its production robots. By combining existing foundation models, novel research, and strategic partnerships with the practical experience from running systems live in production environments, the team aims to improve the adaptability and precision of robotic tasks.

Building on the company’s earlier work in developing an Auditable Control and Planning Framework (ACoP), this research will explore how models that process text, vision, and actions can interact and create a real-time feedback loop for adaptive control.

The company also announced that it is funding Ph.D. work at the University of Washington through a “significant” gift. This gift will sponsor the research of Prof. Sidd Srinivasa, an academic leader in AI and robotics, who also serves as an advisor to Collaborative Robotics.

“The collaboration with Cobot supports our ongoing research at the University of Washington,” said Srinivasa. “Cobot’s commitment to advancing AI and robotics aligns well with our research goals and will help us advance robotic capabilities across multiple dimensions, particularly in the area of bimanual manipulation.”

Collaborative Robotics plans this month to open its Seattle office, which will serve as a hub for these advanced research activities. The company said it expects the city’s tech ecosystem to support its expansion and research goals.

Inside the development of FarmWise’s weeding robot
June 11, 2024

FarmWise is an agtech company pushing the boundaries of automation in agriculture by harnessing the power of computer vision and artificial intelligence (AI). Its flagship product, the Vulcan precision weeding implement, is designed to optimize weed control management on vegetable farms in California, which have been slow to automate due to the complex and versatile nature of specialty crop farming.

By combining cutting-edge technology with custom-built components, FarmWise enhances efficiency, increases crop yields, and addresses labor shortages with a high-accuracy and fully mechanized process to remove weeds.

FarmWise won an RBR50 Robotics Innovation Award for the weeding system in 2021 and will be speaking at RoboBusiness, which runs Oct. 16-17 in Santa Clara, Calif.

Vulcan Automated Weeding System

The Vulcan intra-row weeding implement is FarmWise’s answer to the challenges posed by weed competition in vegetable farms. Weeds can adversely impact crop yield by competing for essential resources such as water, light, and nutrients. Traditional cultivation methods, combined with hand weeding, are labor-intensive and costly, especially in regions like California where labor shortages and rising wages are prevalent.

FarmWise’s Vulcan Automated Weeding System is a pull-behind solution focused on in-season weed control management. The system leverages computer vision and AI to address three key challenges associated with weed removal:

  • Precision
  • Labor
  • Herbicides

Traditional cultivation is imprecise. It either leaves some weeds behind or only partially removes weeds between rows of plants. Such cultivators are also cumbersome and error-prone due to a lack of automation and precision control, which can lead to mistakes such as crop kills.

Hand weeding is more precise, but it requires time-consuming, physically challenging, and repetitive manual labor that is also expensive for producers. Chemically suppressing weeds has been the most common, efficient, and cost-effective method for controlling weeds in row crops. Using herbicides is becoming less attractive for two major reasons: a shortage of herbicides on the market and the environmental call for farmers to use more sustainable weed control methods.

The Vulcan intra-row weeding implement accurately detects and differentiates crops from weeds, allowing for precision weed removal without damaging crops. This level of precision saves farmers up to $250 per acre, maximizes yield potential, and minimizes the need for expensive manual labor.

Key challenges and customization with PBC linear slides

One of the major challenges FarmWise faced was developing a system capable of adapting to the variety of crops, bed spacings, row spacings, and soil morphologies found on vegetable farms. Compared to corn farming in the Midwest, which has undergone significant automation, vegetable farming in California remains labor-intensive due to its complexity.

To meet this need, FarmWise leveraged advancements in deep learning and precision control software to develop Vulcan, which features a perception module combined with an actuator to perform consistent intra- and inter-row weeding at row level across a diverse portfolio of crops.

The weeder module has two translation axes, including a hydraulic z-axis actuator, allowing it to move up to a dozen inches or so vertically. A feeler wheel arrangement locates the weeder module relative to the crop surface and informs it of changes in the bed’s topology. The balance between automation and user control, however, was critical to the success of this application, according to FarmWise senior mechanical engineer David Olivero.
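
As a rough illustration of how a feeler-wheel reading could drive the hydraulic z-axis, consider the following sketch. The function, gain, and travel limit are hypothetical assumptions for illustration, not FarmWise's control code.

    # Minimal sketch of blade-depth control from a feeler-wheel reading. The
    # function, gain, and travel limit are hypothetical, not FarmWise's code.
    def update_blade_depth(feeler_height_mm: float,
                           target_depth_mm: float,
                           current_cmd_mm: float,
                           gain: float = 0.5,
                           max_travel_mm: float = 300.0) -> float:
        """Return a new z-axis command that keeps the blades a fixed depth
        below the bed surface measured by the feeler wheel."""
        desired = feeler_height_mm - target_depth_mm   # where the blades should sit
        error = desired - current_cmd_mm
        new_cmd = current_cmd_mm + gain * error        # simple proportional step
        return max(0.0, min(max_travel_mm, new_cmd))   # respect actuator travel limits

    print(update_blade_depth(feeler_height_mm=210.0, target_depth_mm=25.0,
                             current_cmd_mm=180.0))    # -> 182.5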




“While the goal is to maximize automation, we acknowledge the farmers’ expertise,” Olivero explains. “Farmers understand the optimal depth for weeder blades to effectively remove weeds while avoiding root damage, and we wanted to empower farmers by allowing them to adjust blade depth according to their preference for deeper, more effective weeding or shallower weeding to protect crop roots.”

To achieve this flexibility and reach in the z-axis, FarmWise specified UGA Low Profile Uni-Guide with a custom-positioned hand brake from PBC Linear. Up to 18 of these slides, located at the back of the implement, add a few inches of vertical travel. This addition extends bed capabilities and accommodates varying soil types. The slides offer a robust and customizable solution to adjust the system’s height, enabling it to cater to various farm configurations and terrains.

“We appreciated PBC Linear’s customization platform, which allowed them to create a slide with a specific mount offset tailored to our unique requirements,” says Olivero. “The low-profile design of the slides was vital to reduce the cantilever length of the weeder module, mitigating the risk of transport shock during field-to-field movement. PBC Linear’s reputation for quality products and ease of customization made them a preferred choice.”


A comparison of weeds on a farm before and after using FarmWise’s Vulcan weeding robot. | Credit: FarmWise

Role of computer vision and AI

Central to the Vulcan precision weeding implement’s success is the computer vision and AI in the FarmWise Intelligent Plant System (IPS) Scanner. The IPS Scanner integrates lighting with the camera sensor via a custom LED board. This package enables the capture of consistent, high-resolution images at a high frame rate. The data immediately flows through the IPS pipeline, which detects and localizes each plant in real time.

Sophisticated detection models were developed by gathering a vast number of images and annotating them to accurately distinguish between individual crops and weeds. Using these detection models, the system determines the position of crops and the location of every crop stem and makes precise decisions on blade openings and adjustments.

As the system traverses the field, it makes micro-adjustments to ensure the highest quality weed removal. The actuation engine, controlled by the software, opens and closes the weeding blades as needed to clean the intra-row, or in between the crops located on the same line. In addition to the weeding blades that are connected to the actuator, the precision weeding implement includes a set of top knives that simultaneously clean the inter-row surface area between the rows of crops.
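
The blade-scheduling step can be pictured with a short, hypothetical sketch: given detected crop-stem positions along a row, compute the travel intervals where the blades must open to protect crops and stay closed everywhere else to remove weeds. The radius value and data structures are assumptions for illustration, not the IPS implementation.

    # Illustrative sketch of per-row blade scheduling from crop-stem detections.
    # The threshold and structures are hypothetical, not FarmWise's IPS pipeline.
    from typing import List

    def blade_open_intervals(stem_positions_mm: List[float],
                             crop_radius_mm: float = 40.0) -> List[tuple]:
        """Return (start, end) travel intervals where the intra-row blades must
        open to avoid crop stems; everywhere else they stay closed on weeds."""
        intervals = []
        for stem in sorted(stem_positions_mm):
            start, end = stem - crop_radius_mm, stem + crop_radius_mm
            if intervals and start <= intervals[-1][1]:
                intervals[-1] = (intervals[-1][0], end)   # merge overlapping crop zones
            else:
                intervals.append((start, end))
        return intervals

    # Example: three detected stems along 1 m of row
    print(blade_open_intervals([120.0, 150.0, 600.0]))
    # [(80.0, 190.0), (560.0, 640.0)]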

Operator interface and user control

FarmWise provides an operator interface mounted in the cab of the equipment. This touch screen–based interface enables the operator to set up and verify the system’s configuration for specific crop and field conditions. The operator can adjust precision, blade widths, and other parameters to achieve the desired results. The interface also offers diagnostics and feedback to fine-tune the system’s performance.
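
For illustration only, the kind of per-field job configuration an operator might set from that interface could look like the sketch below; the parameter names and values are hypothetical, not FarmWise's actual settings.

    # Hypothetical per-field job configuration set from the cab-mounted screen.
    job_config = {
        "crop": "romaine",
        "row_spacing_mm": 300,
        "blade_width_mm": 150,
        "blade_depth_mm": 25,         # deeper for aggressive weeding, shallower to protect roots
        "min_stem_clearance_mm": 40,  # precision margin kept around each detected crop stem
    }
    print(job_config["blade_depth_mm"])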

FarmWise’s Vulcan intra-row weeding implement represents a significant step forward in precision agriculture. By providing a tailored solution to weed control management, the system optimizes yield potential, reduces labor costs, and minimizes the need for harmful herbicides.

Through ongoing advancements in computer vision technology and machine learning algorithms, FarmWise continues to push the boundaries of automation in agriculture, offering farmers innovative tools to meet the challenges of modern farming. The collaboration with PBC Linear illustrates the importance of partnerships in developing tailored solutions that drive progress in the agricultural sector.

The post Inside the development of FarmWise’s weeding robot appeared first on The Robot Report.

RBR50 Spotlight: Opteran Mind reverse-engineers natural brain algorithms for mobile robot autonomy https://www.therobotreport.com/rbr50-spotlight-opteran-mind-reverse-engineers-brain-algorithms-mobile-robot-autonomy/ https://www.therobotreport.com/rbr50-spotlight-opteran-mind-reverse-engineers-brain-algorithms-mobile-robot-autonomy/#respond Tue, 11 Jun 2024 14:28:47 +0000 https://www.therobotreport.com/?p=579430 Opteran commercialized its vision-based approach to autonomy by releasing Opteran Mind.

The post RBR50 Spotlight: Opteran Mind reverse-engineers natural brain algorithms for mobile robot autonomy appeared first on The Robot Report.



Organization: Opteran
Country: U.K.
Website: https://opteran.com
Year Founded: 2019
Number of Employees: 11-50
Innovation Class: Technology


Current approaches to machine autonomy require a lot of sensor data and expensive compute and often still fail when exposed to the dynamic nature of the real world, according to Opteran. The company earned RBR50 recognition in 2021 for its lightweight Opteran Development kit, which took inspiration from research into insect intelligence.


In December 2023, Opteran commercialized its vision-based approach to autonomy by releasing Opteran Mind. The company, which has a presence in the U.K., Japan, and the U.S., announced that its new algorithms don’t require training, extensive infrastructure, or connectivity for perception and navigation.

This is an alternative to other AI and simultaneous localization and mapping (SLAM), which are based on decades-old models of the human visual cortex, said James Marshall, a professor at the University of Sheffield and chief scientific officer at Opteran. Animal brains evolved to solve for motion first, not points in space, he noted.

Instead, Opteran Mind is a software product that can run with low-cost, 2D CMOS cameras and on low-power compute for non-deterministic path planning. OEMs and systems integrators can build bespoke systems on the reference hardware for mobile robots, aerial drones, and other devices.

“We provide localization, mapping, and collision prediction from robust panoramic, stabilized 3D CMOS camera input,” explained Marshall.
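
To picture how an OEM or integrator might consume such outputs, here is a deliberately generic sketch of a navigation step that reads a pose estimate and a collision prediction and decides whether to steer or stop. The interface is invented for illustration and is not Opteran's SDK or API.

    # Hypothetical integration sketch: consuming pose and collision estimates
    # from a vision-based navigation module. This is NOT Opteran's API; the
    # interface is invented purely to illustrate the data flow.
    import math

    class NavModule:
        def pose(self):                  # (x, y, heading) in the map frame
            return (1.2, 0.4, math.pi / 2)
        def time_to_collision_s(self):   # predicted seconds until collision
            return 3.5

    def drive_step(nav: NavModule, goal_xy, stop_threshold_s=1.0):
        x, y, heading = nav.pose()
        if nav.time_to_collision_s() < stop_threshold_s:
            return ("stop", 0.0)                         # imminent collision: halt
        bearing = math.atan2(goal_xy[1] - y, goal_xy[0] - x)
        turn = (bearing - heading + math.pi) % (2 * math.pi) - math.pi
        return ("drive", turn)                           # steer toward the goal

    print(drive_step(NavModule(), goal_xy=(3.0, 0.4)))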

At a recent live demonstration at MassRobotics in Boston, the company showed how a simple autonomous mobile robot (AMR) using Opteran Mind 4.1 could navigate and avoid obstacles in a mirrored course that would normally be difficult for other technologies.

It is currently focusing on automated guided vehicles (AGVs), AMRs, and drones for warehousing, inspection, and maintenance.

“We have the only solution that provides robust localization in challenging environments with scene changes, aliasing, and highly dynamic light using the lowest-cost cameras and compute,” it said.

The company is currently working toward safety certifications and “decision engines,” according to Marshall.




Explore the RBR50 Robotics Innovation Awards 2024.


RBR50 Robotics Innovation Awards 2024

Organization | Innovation
ABB Robotics | Modular industrial robot arms offer flexibility
Advanced Construction Robotics | IronBOT makes rebar installation faster, safer
Agility Robotics | Digit humanoid gets feet wet with logistics work
Amazon Robotics | Amazon strengthens portfolio with heavy-duty AGV
Ambi Robotics | AmbiSort uses real-world data to improve picking
Apptronik | Apollo humanoid features bespoke linear actuators
Boston Dynamics | Atlas shows off unique skills for humanoid
Brightpick | Autopicker applies mobile manipulation, AI to warehouses
Capra Robotics | Hircus AMR bridges gap between indoor, outdoor logistics
Dexterity | Dexterity stacks robotics and AI for truck loading
Disney | Disney brings beloved characters to life through robotics
Doosan | App-like Dart-Suite eases cobot programming
Electric Sheep | Vertical integration positions landscaping startup for success
Exotec | Skypod ASRS scales to serve automotive supplier
FANUC | FANUC ships one-millionth industrial robot
Figure | Startup builds working humanoid within one year
Fraunhofer Institute for Material Flow and Logistics | evoBot features unique mobile manipulator design
Gardarika Tres | Develops de-mining robot for Ukraine
Geek+ | Upgrades PopPick goods-to-person system
Glidance | Provides independence to visually impaired individuals
Harvard University | Exoskeleton improves walking for people with Parkinson’s disease
ifm efector | Obstacle Detection System simplifies mobile robot development
igus | ReBeL cobot gets low-cost, human-like hand
Instock | Instock turns fulfillment processes upside down with ASRS
Kodama Systems | Startup uses robotics to prevent wildfires
Kodiak Robotics | Autonomous pickup truck to enhance U.S. military operations
KUKA | Robotic arm leader doubles down on mobile robots for logistics
Locus Robotics | Mobile robot leader surpasses 2 billion picks
MassRobotics Accelerator | Equity-free accelerator positions startups for success
Mecademic | MCS500 SCARA robot accelerates micro-automation
MIT | Robotic ventricle advances understanding of heart disease
Mujin | TruckBot accelerates automated truck unloading
Mushiny | Intelligent 3D sorter ramps up throughput, flexibility
NASA | MOXIE completes historic oxygen-making mission on Mars
Neya Systems | Development of cybersecurity standards harden AGVs
NVIDIA | Nova Carter gives mobile robots all-around sight
Olive Robotics | EdgeROS eases robotics development process
OpenAI | LLMs enable embedded AI to flourish
Opteran | Applies insect intelligence to mobile robot navigation
Renovate Robotics | Rufus robot automates installation of roof shingles
Robel | Automates railway repairs to overcome labor shortage
Robust AI | Carter AMR joins DHL's impressive robotics portfolio
Rockwell Automation | Adds OTTO Motors mobile robots to manufacturing lineup
Sereact | PickGPT harnesses power of generative AI for robotics
Simbe Robotics | Scales inventory robotics deal with BJ’s Wholesale Club
Slip Robotics | Simplifies trailer loading/unloading with heavy-duty AMR
Symbotic | Walmart-backed company rides wave of logistics automation demand
Toyota Research Institute | Builds large behavior models for fast robot teaching
ULC Technologies | Cable Splicing Machine improve safety, power grid reliability
Universal Robots | Cobot leader strengthens lineup with UR30

The post RBR50 Spotlight: Opteran Mind reverse-engineers natural brain algorithms for mobile robot autonomy appeared first on The Robot Report.

Unleashing potential: The role of software development in advancing robotics https://www.therobotreport.com/unleashing-potential-software-development-role-advancing-robotics/ https://www.therobotreport.com/unleashing-potential-software-development-role-advancing-robotics/#respond Sun, 09 Jun 2024 15:15:09 +0000 https://www.therobotreport.com/?p=579358 As robotics serves more use cases across industries, hardware and software development should be parallel efforts, says Radixweb.

The post Unleashing potential: The role of software development in advancing robotics appeared first on The Robot Report.


A robotics strategy should consider software development in parallel, says Radixweb. Source: Adobe Stock

In today’s fast-tech era, robotics engineering is transforming multiple industrial sectors. From cartesian robots to robotaxis, cutting-edge technologies are automating applications in logistics, healthcare, finance, and manufacturing. Moreover, automation uses modern software to execute multiple tasks or even one specific task with minimal human interference. Hence, software development is a critical player in building these robots.

The growing technology stack in robotics is one reason the software development market is expected to reach a whopping valuation of $1 billion by 2027. The industry involves designing, building, and maintaining software using complex algorithms, machine learning, and artificial intelligence to make operations more efficient and enable autonomous decision making.

Integrating robotics and software development

With the evolution of robotics, this subset of software engineering offers a new era of opportunities. Developers are now working on intelligent machines that can execute multiple tasks with minimal human intervention. These machines, in turn, run on new software frameworks designed specifically for them.

From perception and navigation to object recognition and manipulation, as well as higher-level tasks such as fleet management and human-machine interaction, reliable and explainable software is essential to commercially successful systems.

One of the essential functions of software engineering is building and testing robotics applications. Developers need to simulate real-world scenarios and gather insights to meet their testing goals, so they can recognize and fix bugs before deploying applications in a real environment.

In addition, developers should remember that they are building systems to minimize human effort, not just improve industrial efficiency. Their efforts are not just for the sake of novel technologies but to provide economic and social benefits.




Software developers can advance robotics

Integrating software and robotics promises a symbiotic partnership between the two domains. Apart from collaborating on cutting-edge systems, coordinated development efforts enable the following benefits:

  1. Consistency — Robots can be programmed to execute commands with consistency, eradicating human errors caused by distractions or fatigue.
  2. Precision — Advanced algorithms also allow robots to perform tasks with a high degree of accuracy, enhancing overall product quality.
  3. Increased speed — Software-driven robots can carry out tasks much faster than human beings, saving time and money in production activities.
  4. Motion planning — Along with modern motors, motion control software allows robots to navigate through complex environments while avoiding potential injuries or collisions.
  5. Minimal risk — Advanced robots can handle tasks that involve high physical risks, extreme temperatures, or exposure to toxic materials, ensuring employees’ safety.
  6. Remote operations — Building advanced software systems for robots enables them to be monitored and controlled remotely, minimizing the need for human workers to be always present in hazardous settings.
  7. AI and machine learning — The integration of AI can help robots understand, learn, adapt, and make independent decisions based on the data collected.
  8. Real-time data analysis — As stationary and mobile platforms, robots can gather large amounts of data during their operations. With the right software, this data can easily be examined in real time to determine areas for improvement (a brief sketch follows this list).
  9. Scalability — Robot users can use software to scale robot fleets up or down in response to ever-changing business demands, providing operational flexibility.
  10. Reduced downtime — With predictive maintenance software, robots can reliably function for a long time.
  11. Decreased labor costs — Robotics minimizes the requirement for manual labor, reducing the cost of hiring human resources and emphasizing more complex activities that need creativity and critical thinking.
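
Here is the brief sketch referenced in item 8: a rolling-baseline check that flags anomalous motor-current readings in real time. The window size, threshold, and field names are illustrative assumptions only.

    # Minimal sketch of real-time telemetry analysis for a robot fleet:
    # flag joints whose motor current drifts away from a rolling baseline.
    from collections import deque
    from statistics import mean, stdev

    class CurrentMonitor:
        def __init__(self, window=50, sigma=3.0):
            self.samples = deque(maxlen=window)
            self.sigma = sigma

        def update(self, current_amps: float) -> bool:
            """Return True if the new reading looks anomalous."""
            anomalous = False
            if len(self.samples) >= 10:
                mu, sd = mean(self.samples), stdev(self.samples)
                anomalous = sd > 0 and abs(current_amps - mu) > self.sigma * sd
            self.samples.append(current_amps)
            return anomalous

    monitor = CurrentMonitor()
    for reading in [1.0, 1.1, 0.9, 1.0, 1.05, 1.0, 0.95, 1.1, 1.0, 1.0, 4.2]:
        if monitor.update(reading):
            print("anomaly detected:", reading)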

Best practices for integrating software and robots

To fully leverage the benefits of software development for robotics, businesses must adopt effective strategies. Here are a few tailored practices to consider; a short example after the list illustrates two of them:

  • Design an intuitive user interface for managing and configuring automated processes.
  • Integrate real-time monitoring and reporting functionalities to track the progress of your tasks.
  • Adopt continuous integration practices to integrate code modifications and ensure system durability constantly.
  • Adhere to applicable data-privacy and cybersecurity protocols to maintain client trust.
  • Analyze existing workflows to detect any vulnerabilities and areas for improvement.
  • Use error-handling techniques to handle any unforeseen scenarios.
  • Implement automated testing frameworks to encourage efficient testing.
  • Provide suitable access controls to protect these systems from unauthorized access.
  • Identify the applications that can be automated for a particular market.
  • Break down complicated tasks into small, manageable steps.
  • Perform extensive testing to recognize and rectify any issues or errors.
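
The short example below combines two of the practices above, error handling and automated testing, for a hypothetical pick task. The exception type and function names are invented for illustration; the test functions can be run directly or under pytest.

    # Illustrative sketch: error handling plus automated tests for a pick task.
    class GripperTimeout(Exception):
        pass

    def pick(item_id: str, attempt_grip) -> str:
        """Try to grip an item, retrying once before reporting a failure."""
        for _ in range(2):
            try:
                attempt_grip(item_id)
                return "picked"
            except GripperTimeout:
                continue              # transient fault: retry instead of crashing
        return "failed"               # surface the error to the calling workflow

    def test_pick_retries_then_fails():
        def always_times_out(_):
            raise GripperTimeout()
        assert pick("sku-123", always_times_out) == "failed"

    def test_pick_succeeds():
        assert pick("sku-123", lambda _: None) == "picked"

    if __name__ == "__main__":
        test_pick_retries_then_fails()
        test_pick_succeeds()
        print("all checks passed")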

As robotics finds new use cases, software must evolve so the hardware can satisfy the needs of more industries. For Industry 4.0, software developers are partnering with hardware and service providers to build systems that are easier to build, use, repurpose, and monitor.

Innovative combinations of software and robotics can result in new levels of autonomy and open new opportunities.

About the author

Sarrah Pitaliya is vice president of marketing at Radixweb. With a strong grasp of market research and end-to-end digital branding strategies, she leads a team focused on corporate rebranding, user experience marketing, and demand generation.

Radixweb is a software development company with offices in the U.S. and India. This entry is reposted with permission.

The post Unleashing potential: The role of software development in advancing robotics appeared first on The Robot Report.

Investor Dean Drako acquires Cobalt Robotics https://www.therobotreport.com/investor-dean-drako-acquires-cobalt-robotics/ https://www.therobotreport.com/investor-dean-drako-acquires-cobalt-robotics/#respond Wed, 05 Jun 2024 17:06:35 +0000 https://www.therobotreport.com/?p=579305 Cobalt AI is set to expand the use of its human-verified AI technology in various enterprise security applications.

The post Investor Dean Drako acquires Cobalt Robotics appeared first on The Robot Report.


The Cobalt mobile robot features autonomous driving technology, allowing it to navigate through various terrains and obstacles with ease, ensuring constant vigilance without human operation. | Credit: Cobalt Robotics

Cobalt Robotics has been acquired by investor Dean Drako, and the name of the firm has been changed to Cobalt AI. Financial terms of the acquisition were not disclosed. The name change was made to more accurately represent the future direction of the company and the products it offers.

Drako is the founder and CEO of Eagle Eye Networks, in addition to a number of other enterprises and side projects. Cobalt AI fits closest to the Eagle Eye Smart Video Surveillance portfolio of solutions.

There are no major changes to Cobalt’s leadership other than Drako serving as chairman. Ken Wolff, Cobalt’s current CEO, will continue leading the company, which will also continue to operate independently with its current management team and entire staff.

Cobalt started with mobile robotics

Cobalt Robotics was founded in 2016 as a developer of autonomous mobile robots (AMRs) for security applications. The AMRs were designed to patrol the interior of a facility while actively surveilling activities and remotely monitoring the facilities as an extension of the building’s security.

To meet the growing needs of its corporate customers, Cobalt developed AI-based algorithms for alarm filtering, remote monitoring, sensing, and other autonomous data-gathering functions. In addition to the sensors onboard the Cobalt AMR, the Cobalt Monitoring Intelligence and Cobalt Command Center gather data from a broad range of cameras, access control systems, robots, and other edge devices.
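
A simplified, hypothetical sketch of this kind of human-in-the-loop alarm filtering is shown below. The thresholds and event fields are invented for illustration and do not describe Cobalt's actual algorithms.

    # Hypothetical sketch of "human-verified AI" alarm filtering: an AI model
    # scores incoming events, auto-dismisses obvious noise, auto-escalates
    # obvious threats, and queues the ambiguous middle for a human operator.
    def route_alarm(event: dict, score: float,
                    dismiss_below: float = 0.2, escalate_above: float = 0.9) -> str:
        if score < dismiss_below:
            return "dismissed"            # e.g., a cleaning crew seen every night
        if score > escalate_above:
            return "escalated"            # clear intrusion: notify on-site security
        return "human_review"             # ambiguous: send clip to a remote operator

    print(route_alarm({"source": "camera_12", "type": "motion"}, score=0.55))
    # human_review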




“The company monitoring and command center technology is a catalyst for a new era of security,” said Drako. “They have created field-proven AI to make security and guarding tremendously more effective and efficient. Furthermore, Cobalt’s open platform strategy, which integrates with a plethora of video and access systems, is aligned with the open product strategy I believe in.”

Drako’s vision for sensor-monitoring AI

In a recent LinkedIn post, Drako explained why he made the deal.

“I did an extensive search, with a goal to acquire the company with the most powerful AI-based enterprise security automation technology in our physical security industry. Cobalt’s AI technologies, including their monitoring and command center solutions, are years ahead — they will be one of the catalysts for a new era of security.

“Importantly, Cobalt’s open platform strategy, which integrates with a wide range of video and access control systems, aligns with the open product strategy I strongly believe in.

“I am working closely with Cobalt AI’s leadership team, as well as infusing significant capital, to quickly scale their ‘human verified AI’ technology across enterprise security applications.”


Cobalt AI is marketing “Human verified AI” to promote human-in-the-loop methods of leveraging AI and human-based perception to monitor and interpret security information. | Credit: Dean Drako

“We are thrilled that Dean Drako has acquired Cobalt and will serve as chairman. Dean has invested capital and strategic insights to grow other physical security companies to unicorns and technology leaders in their space,” said Wolff. “We share a mutual vision of the tremendous advantages of automation through AI with human verification.  Drako’s acquisition validates our strategy to improve monitoring, response times and lower costs and also gives us the capital to deliver for our enterprise clients.”

The post Investor Dean Drako acquires Cobalt Robotics appeared first on The Robot Report.

Vention, NVIDIA partner to bring automation and AI to small manufacturers https://www.therobotreport.com/vention-nvidia-partner-bringing-automation-small-manufacturers/ https://www.therobotreport.com/vention-nvidia-partner-bringing-automation-small-manufacturers/#respond Mon, 03 Jun 2024 19:05:54 +0000 https://www.therobotreport.com/?p=579283 Under the collaboration, Vention and NVIDIA will use AI to create near-accurate digital twins significantly faster and more efficiently.

The post Vention, NVIDIA partner to bring automation and AI to small manufacturers appeared first on The Robot Report.


MAP is a cloud-based command center that can design and manage automated manufacturing workcells. | Source: Vention

Vention Inc. yesterday announced a collaboration with NVIDIA Corp. to bring industrial automation to small and midsize manufacturers. The companies said they plan to use NVIDIA’s artificial intelligence and accelerated computing to advance cloud robotics. 

The partners said they will use AI to create near-accurate digital twins significantly faster and more efficiently. With this technology, manufacturers can efficiently test their projects before they invest, according to Vention and NVIDIA.

The companies said they will jointly develop generative designs for robot cells, co-pilot programming, physics-based simulation, and autonomous robots. 

“The Vention ecosystem with NVIDIA’s robotics technology and AI expertise will help bring pivotal innovation to the manufacturing renaissance and overall industry,” stated Etienne Lacroix, founder and CEO of Vention. “Now, even the most complex use cases can become achievable for small and medium[-size] manufacturers.”

Vention to simplify MAP experience with AI

Vention said its Manufacturing Automation Platform (MAP) allows clients to manage industrial robots directly from their Web browsers. The Montreal-based company said MAP draws on a proprietary dataset of several hundred thousand workcell designs created since its founding in 2016.

The announcement marks a year of collaboration with NVIDIA to apply AI to industrial automation projects. Vention said it intends to use AI to simplify the user experience in the cloud and on the edge.




NVIDIA to help bring AI to the forefront of manufacturing

NVIDIA said its technology, combined with Vention’s modular hardware and plug-and-play motion control, will bring cutting-edge AI to the forefront of manufacturing. The companies said they aim to widen access to industrial automation for small and midsize manufacturers.

“Vention’s cloud-based robotics platform, powered by NVIDIA AI, will empower industrial equipment manufacturing companies everywhere to seamlessly design, deploy, and operate robot cells, helping drive the industry forward,” stated Deepu Talla, vice president of robotics and edge computing at NVIDIA.

Vention said it is already known for its user-friendly software products and interface, and it expects to announce a number of new products resulting from this collaboration in Q3 of 2024.

This isn’t the first time NVIDIA and Vention have worked together. Vention, along with Solomon, Techman Robot, and Yaskawa, is among the companies using NVIDIA’s Isaac Manipulator to build AI-based robotic arms.

Vention also recently announced a partnership with Flexxbotics to support robot-driven manufacturing. The companies said their combined offering for robotic workcell digitalization in next-generation machining environments is now available.

The post Vention, NVIDIA partner to bring automation and AI to small manufacturers appeared first on The Robot Report.

Adapta Robotics execs explain development strategies for testing and inventory robots https://www.therobotreport.com/adapta-robotics-execs-explain-development-strategies-for-testing-and-inventory-robots/ https://www.therobotreport.com/adapta-robotics-execs-explain-development-strategies-for-testing-and-inventory-robots/#respond Mon, 03 Jun 2024 17:46:59 +0000 https://www.therobotreport.com/?p=579282 Adapta Robotics grew out of a university competition team, and the Romanian startup identified electronics and retail as markets.

The post Adapta Robotics execs explain development strategies for testing and inventory robots appeared first on The Robot Report.


Adapta has developed robots for specific use cases. Source: Adapta Robotics

Starting a robotics company is always challenging, as inventors and entrepreneurs scramble to get access to capital and tap the local talent pool. However, identifying the right applications and markets is a good place to begin, according to the co-founders of Adapta Robotics & Engineering SRL. 

The Bucharest, Romania-based company said it specializes in addressing challenges in settings where traditional automation has fallen short and in use cases that have been overlooked. Adapta’s model includes a one-time robot purchase fee, plus annual license and maintenance fees.

In 2017, the company’s founders developed the first prototype of MATT, a delta robot for device testing, at rinf.tech with European Union support. In 2021, they branded Adapta and developed the Effective Retail Intelligent Scanner, or ERIS, for scanning items on store shelves. In 2022, Adapta became an independent brand.

Mihai Craciunescu, co-founder and CEO, and Diana Baicu, co-founder and lead robotics engineer of Adapta Robotics, spoke with The Robot Report about the company’s approach to designing and customizing robots for applications that were previously difficult to automate.

Adapta Robotics started with competition team

What is the origin of Adapta Robotics?

Craciunescu: Cristian Dobre, Diana, and I started the company in 2015, but we had been a group in the University Politehnica of Bucharest, the largest technical university in Romania, starting in 2012. Our goal was to create robots for competitions, and we participated in competitions from Europe to Turkey to China.

We won most of them, and we’ve built line-following robots, sumo robots, and small-scale self-driving cars for Continental. We then said, “OK, what’s next?” We had to choose between pure research in academia or starting a robotics company, and we wanted the more applied side of robotics, based on our success in those challenges.

How did you determine what tasks or applications to try to automate? As we saw at R-24 and other events, there are already a lot of robots out there, from disinfection to materials handling.

Craciunescu: Our first idea, the MATT testing robot, came totally by chance. We were exposed to a U.S. manufacturer that did its software development and testing in Romania before pushing updates to its whole fleet of mobile phones. Phones in the Asian market were a testing ground for it, and all of its software testing processes were automated.

During one test, the screen went blank, but behind the scenes, the processor was still doing the right tasks. The company didn’t catch it, and millions of phones went blank. Knowing about this issue and having the robotics experience we did, we said, “Why don’t we build a robot to test these phones?”

Then, we slowly saw similar needs in other industries like automotive manufacturing. Infotainment systems need to be tested, and automakers want to make sure everything works as intended. Many other use cases derive from that.

We identified the problems and clients, and then we did a bit of market research. There were a couple of competitors, but they were very expensive and had limited capabilities.

How did you arrive at inventory with ERIS?

Craciunescu: We had a client with a couple of issues in its stores. One, some products had labels showing the wrong prices.

Another was that [the retailer] knew from its systems that it had a certain amount of products in stock, but it did not know if those products were on the shelf or in the warehouse somewhere. If they’re not on the shelf, that means lost opportunities.

The company was aware of the solutions on the market, including inventory robots from the States, but those were too expensive for the task it wanted to do. And, if you scan a shelf with an autonomous robot, and you have a report that 20 labels are wrong, you still need a human to manually replace the labels.

The company didn’t care about the autonomous part; it just wanted the problem to be fixed, so we have a scanner on a pushcart. We focused on reading the labels correctly, detecting the products that are out of stock or soon will be, and creating a report on just those three features.

We’re also exploring other functionalities like planogram compliance, making sure the products are placed where they’re supposed to be, while checking if you have multiple labels of the same product displayed on the shelf. But that was the main idea: Create a relatively cheap solution to scan the shelf and give you an audit.


Adapta founders, from left: Cristian Dobre, Mihai Craciunescu, and Diana Baicu. Source: Adapta Robotics

Know thy customers

Were you developing these systems for specific customers, or did you already have broader applications in mind? Did Adapta Robotics develop them with multiple customers at once, or did you start with one customer and then branch out?

Craciunescu: It was mixed. As an engineer, you can design all kinds of robots. Our approach is to try to solve problems that are specific to a certain industry.

Having the client tell us what they need is the most valuable feedback. We could think of different solutions in our lab, but when you are doing that in real life, you can quickly see what things to focus on.

It’s very important to have a client in the loop when you design something. Ideally, you should have more than one, but as we got started, we needed at least one that could use, let’s say, the first version of the robots. That client can see what doesn’t work, and then you can improve on that. After you have a prototype, you can look around in the market.

Baicu: When we’re getting this information from clients and designing a product, we try to make something more general that can be easily customized afterwards. You don’t want it customized from the beginning because that limits other possibilities, even for the development for that customer.

But then you need first customers that are patient, right? They have to be willing to work with you and understand that not everything’s going to work right away.

Craciunescu: That would be the ideal setup. Some clients were really pushy, and it was up to us to deliver. They can understand that something doesn’t work, but we had to fix it as soon as possible. We had quite a bit of pressure.


The MATT delta robot for product testing. Source: Adapta Robotics

Co-founders share lessons learned

The robotics development process is rarely a straight path. What are some of the lessons that you learned or surprises along the way?

Baicu: We know this from when we were building competition robots, but it became even more clear [with Adapta Robotics]. It’s about the design choices overall. … You need to have the right components and architectures from the beginning, at least with durability and scalability in mind, because otherwise, it will be very complicated to modify everything rather than just thinking about it from the beginning.

Of course, there’s a balance between how far you can go in these choices. Very expensive components or very complicated architectures take more time to implement and more money.

When you’re sourcing components, whether it’s sensors or actuators, do you have preferred partners? How did you identify what would work best, given your and your customers’ priorities?

Craciunescu: It’s an iterative process. Let’s take ERIS as an example. Initially, we made an educated guess about the best-in-class cameras we needed.

When we actually connected them to the computer, we saw that they were on USB 3.0. We had the right cameras, but the communication protocol made the processor waste a lot of time converting the serial information to actual pictures and data metrics you could use.

We wanted that processor to run other things, so the next step was to find some cameras on another protocol. We then looked at different distributors and so on.

Another aspect we did not have experience with was, for example, cameras to measure the distances. Our approach was to buy depth cameras from all the major manufacturers and test them internally. We had a couple of criteria — we knew we wanted to look at shelves that were up to 1 m in depth and knew the distances from the robot to the shelves.

We also looked at the company maturity, or if they could provide the cameras 10 years from now. If we’re happy with all these smaller decisions, we’ll pull the trigger.

If you know from prior experience what’s the right solution for you, that’s fine, but most of the time, you need tests to validate the right path. This makes R&D quite expensive, but sometimes, you don’t have the luxury to buy all the solutions out there.


The ERIS inventory-scanning system works with human associates. Source: Adapta Robotics

When to focus on integration and simulation

On the software side, Adapta Robotics’ customers may use different systems. How much work does integration involve?

Baicu: It’s a significant part of what we do. There’s a focus from the beginning on our side to create the software infrastructure and the intelligence as well, meaning the computer vision and algorithms, or machine learning and AI. It’s a process that needs to be supported.

First of all, there are the updates to improve or fix bugs, and at the same time, we maintain the algorithmic part with new data sets, examples, or retraining if needed.

How much do you rely on simulation for training and deployments?

Craciunescu: We try not to rely too much on simulation. We do some — for example, for mechanical stress testing. But we don’t go into the details like kinematics. We do what makes sense from an engineering point of view, as we want to build an actual product.

You can focus a lot on simulation, and that can be a trap because you can make the most beautiful simulations in the world and not have a product.

Baicu: At the same time, you can transpose simulations into real-life situations. The simulation is an idealized environment, so you have to introduce noise or variations, but it will never be the real world. Sometimes, it can be very complicated if you put too much effort on the simulation side.




Adapta gives a glimpse of its roadmap

What is Adapta Robotics planning for this year?

Craciunescu: MATT is designed to be flexible and for multiple use cases. This year, we’re looking at specialized versions of MATT for different industries, like refurbishment, automotive, and medical.

It’s currently used in those industries with add-ons, but there are not models specifically designed for each industry, which could help the selling process.

Are you focusing more on productizing or re-engineering your technology? For instance, MATT’s software suite now works with a six degree-of-freedom robot arm.

Baicu: A bit of both. We now have clients and know their needs. Maybe we just present it differently or add some features that make the product easier to set up and use. Sometimes, it’s about creating new add-ons or complementary solutions that can respond to the needs in that field of activity.

Craciunescu: With ERIS, we already have a client that’s mostly in the logistics space and requires things like barcode detection. It’s similar to retail but a different application. We’re exploring ways of reusing parts of the hardware and software that we’ve developed.

Are you in the midst of fundraising? Are you looking to expand to new markets internationally?

Craciunescu: Yes. We’re in the process of raising capital and are in a due diligence phase. We’re currently at 10 highly skilled professionals, but capital would allow us to be more aggressive in markets such as automotive.

Currently, 50% of our clients are in the U.S., and the rest are Western Europe. We have a couple of clients in India and Brazil as well.

At R-24, we discussed the Danish robotics scene. What is the industry like in Romania?

Craciunescu: Denmark is an outlier, and it’s doing very well. The European market in general doesn’t encourage R&D, which is very cash-intensive. If you look across Europe, robotics requires funding from the EU and from each individual state.

There are other robotics companies in Romania, and we have a lot of talent locally. We sometimes find out about one another at events outside of Romania.

Baicu: Romania is quite well-developed on the IT or software development side. It’s fairly complicated to have a discussion about what the needs are for a company that does hardware.

With the rise of AI, we need to have a deeper consideration for what we are putting our efforts into. We’re now seeing a bit of a shift, and are seeing a better attitude toward manufacturing and hardware.

Craciunescu: The brain drain really affects us. As a young student willing to learn about robotics, I had no mentors. That’s a problem for the medical industry as well and society as a whole.

Right now, we’re trying at Adapta to provide a space for new students to come and learn from professionals. We had the option of going abroad but decided to build something locally. Being part of the EU, we can basically scale up anywhere we want.

The post Adapta Robotics execs explain development strategies for testing and inventory robots appeared first on The Robot Report.

NVIDIA highlights Omniverse, Isaac adoption by robot market leaders https://www.therobotreport.com/nvidia-highlights-omniverse-isaac-adoption-by-market-leaders/ https://www.therobotreport.com/nvidia-highlights-omniverse-isaac-adoption-by-market-leaders/#comments Mon, 03 Jun 2024 00:30:24 +0000 https://www.therobotreport.com/?p=579267 CEO Jensen Huang announced that robotic factories can accelerate industrial digitization with NVIDIA AI and Omniverse.

The post NVIDIA highlights Omniverse, Isaac adoption by robot market leaders appeared first on The Robot Report.


The NVIDIA Isaac platform powers electronics, healthcare and industrial applications. | Credit: eCential Robotics (left), Amazon Robotics (right)

In addition to artificial intelligence products, NVIDIA Corp. founder and CEO Jensen Huang announced several robotics-related items during his keynote today at COMPUTEX in Taiwan. The company said that many computer manufacturers are producing a new generation of “AI computers” using its chips to enable Omniverse for modeling and business workflows.

Back in April, NVIDIA announced several new robotics-related technologies at the 2024 GPU Technology Conference (GTC). These new products included Project GR00T, Jetson Thor, Isaac Lab, OSMO, Isaac Manipulator, and Isaac Perceptor.

NVIDIA Isaac Perceptor is a new reference workflow for autonomous mobile robots (AMRs) and automated guided vehicles (AGVs). Isaac Manipulator offers new foundation models and a reference workflow for industrial robotic arms.

The company has also updated Jetson for Robotics in NVIDIA JetPack 6.0. It has included NVIDIA Isaac Lab, a lightweight app for robot learning, in NVIDIA Isaac Sim 4.0.

The Santa Clara, Calif.-based company was also a 2024 RBR50 award winner for its Nova Carter reference AMR platform developed with Segway Robotics.




Manufacturers can simulate in Omniverse

While there weren’t any new robotics product announcements, NVIDIA did say that many of its partners are beginning to use AI and Isaac Sim in the design of new manufacturing facilities. Through the creation of a digital twin of the factory floor, these companies are now able to simulate the assembly process by programming and running robots in simulation.

“Everything is going to be robotic. All of the factories will be robotic. The factories will orchestrate robots,” said Huang, reported Digitimes Asia. “And those robots will be building products that are robotic robots interacting with robots, building robotic products.”

He showed video clips of robots moving by themselves in Omniverse, where simulations were implemented through digital twins.

“Generative physical AI can learn skills using reinforcement learning from physics feedback in a simulated world,” Huang said during his keynote. “In these simulation environments, robots learn to make decisions by performing actions in a virtual world that obeys the laws of physics.”
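
A minimal version of that learning loop, shown here with the open-source Gymnasium API and its CartPole physics environment as a stand-in rather than NVIDIA's Isaac Lab, looks like this:

    # A minimal reinforcement-learning interaction loop of the kind Huang
    # describes, using Gymnasium as a stand-in physics simulator
    # (this is not NVIDIA Isaac Lab code).
    import gymnasium as gym

    env = gym.make("CartPole-v1")          # a simple physics-based environment
    obs, info = env.reset(seed=0)

    total_reward = 0.0
    for _ in range(200):
        action = env.action_space.sample() # a trained policy would act here
        obs, reward, terminated, truncated, info = env.step(action)
        total_reward += reward             # physics feedback drives learning
        if terminated or truncated:
            obs, info = env.reset()

    print("reward collected:", total_reward)
    env.close()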

Taiwanese electronics manufacturer and NVIDIA partner Foxconn is using Isaac Sim to plan factories that will produce the next generation of NVIDIA processors.

NVIDIA also announced that electronics manufacturers Delta Electronics, Pegatron, and Wistron are using NVIDIA Metropolis, Omniverse, and Isaac to simulate, build, and operate their facilities with “virtual factories.”


Foxconn’s factory simulated in Omniverse, featuring AI robots developed by NVIDIA robotics partners. | Credit: Foxconn

Top robot developers use Isaac robotics platform

NVIDIA claimed that top robot developers are using the Isaac robotics platform to create AI-enabled autonomous devices and robots. They included more than a dozen world leaders in the robotics industry, including BYD Electronics, Siemens, Teradyne Robotics, and Intrinsic.

These users are adding NVIDIA Isaac-accelerated libraries, physically-based simulation, and AI models to their software frameworks and robot models. This can make factories, warehouses, and distribution centers more efficient and safer for people who work there, said NVIDIA. It added that the robots can help people with repetitive or very precise tasks.

“The era of robotics has arrived. Everything that moves will one day be autonomous,” said Huang. “We are working to accelerate generative physical AI by advancing the NVIDIA robotics stack, including Omniverse for simulation applications, Project GR00T humanoid foundation models, and the Jetson Thor robotics computer.”

Siemens, a worldwide leader in industrial automation, uses NVIDIA Isaac Sim for its software-in-the-loop capabilities. The company said Isaac technologies speed its development and testing of new robotics skills like SIMATIC Robot PickAI (PRO) and SIMATIC Robot Pack AI.

According to Siemens, the industrial robots can now independently and successfully pick and pack arbitrary goods without human training by using cognitive AI vision software.

“AI-powered robots will accelerate the digital transformation of industry and take over repetitive tasks that were previously impossible to automate so we can unlock human potential for more creative and valuable work,” said Roland Busch, president and CEO at Siemens AG.

Siemens said it also brings vision AI to robots from KUKA, Techman Robot, Universal Robots, and Yaskawa by seamlessly integrating with automation solutions and making it easy to use on an NVIDIA-powered Siemens industrial PC foundation.


Foxconn virtual factory digital twin built using AI, NVIDIA Omniverse, NVIDIA Isaac and NVIDIA Metropolis. | Credit: Foxconn

Intrinsic using Isaac Manipulator to simulate robot gripping

Alphabet software and AI robotics subsidiary Intrinsic, which purchased Open Source Robotics Corporation in late 2022, tested Isaac Manipulator on its robot-agnostic software platform. With Manipulator, Intrinsic showed that a scalable, universal robotic-grasping skill can function across grippers, settings, and objects.

Solomon, Techman Robot, Vention and Yaskawa are among the companies using Isaac Manipulator for building AI-based robotic arms. With partners ADLINK, Advantech, and ONYX, NVIDIA said AI Enterprise on the IGX platform offers edge AI systems meeting strict regulatory standards, essential for medical technology and other industries.

“We couldn’t have found a better collaborator in NVIDIA, who are helping to pave the way for foundation models to have a profound impact on industrial robotics,” stated Wendy Tan White, CEO of Intrinsic. “As our teams work together on integrating NVIDIA Isaac and Intrinsic’s platform, the potential value we can unlock for millions of developers and businesses is immense.”

Over 100 companies are adopting NVIDIA Isaac Sim to simulate, test and validate robotics applications, including Hexagon, Husqvarna Group, and MathWorks. Humanoid robot developers Agility Robotics, Boston Dynamics, Figure AI, Fourier Intelligence, and Sanctuary AI are adopting Isaac Lab.

In addition, NVIDIA noted that robotics developers such as Moon Surgical and the SETI Institute are using NVIDIA Holoscan on the updated IGX Orin platform for sensor processing and deploying AI and high-performance computing for flexible sensor integration and real-time insights.

The post NVIDIA highlights Omniverse, Isaac adoption by robot market leaders appeared first on The Robot Report.
