Microprocessors / SoCs Archives - The Robot Report

NVIDIA, Foxconn to build advanced computing center in Taiwan

The Foxconn computing center will be anchored by NVIDIA's GB200 superchip servers and will enable electric vehicle and smart city development.


NVIDIA CEO Jensen Huang and Foxconn CEO Young Liu celebrate their cooperation. | Source: Foxconn

NVIDIA and Hon Hai Technology Group, better known as Foxconn, this week said they plan to jointly build an advanced computing center in Kaohsiung, Taiwan. At the core of the center will be the NVIDIA Blackwell platform. The companies made the announcement at Computex 2024.

NVIDIA said the cutting-edge computing center will be anchored by GB200 superchip servers and consist of a total of 64 racks and 4,608 GPUs. The electronics manufacturer will contribute its production scale and said it expects to complete the center by 2026.

The companies said their latest collaboration demonstrates their commitment to building servers to drive artificial intelligence, electric vehicles (EVs), smart factories, smart cities, robotics, and more. 

“A new era of computing has dawned, fueled by surging global demand for generative AI data centers,” stated Jensen Huang, founder and CEO of NVIDIA. “Foxconn stands at the forefront as a leading supplier of NVIDIA computing and a trailblazer in the application of generative AI in manufacturing and robotics.”

“Leveraging NVIDIA Omniverse and Isaac robotics platforms, Foxconn is harnessing cutting-edge AI and digital twin technologies to construct their advanced computing center in Kaohsiung,” he added.

Cooperation continues for Foxconn with new superchip 

This isn’t the first time Foxconn and NVIDIA have collaborated. The company has worked closely with NVIDIA on various product development projects. NVIDIA said Foxconn has excellent vertical integration capabilities and is a vital partner for the new GB200 Grace Blackwell Superchip. 

The superchip connects two NVIDIA B200 Tensor Core GPUs to the NVIDIA Grace CPU over a 900GB/s ultra-low-power NVLink chip-to-chip interconnect. The company said that for the highest AI performance, GB200-powered systems can be connected with the NVIDIA Quantum-X800 InfiniBand and Spectrum-X800 Ethernet platforms.
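As a quick sanity check on those figures (our arithmetic, not the companies'), the announced counts imply 72 GPUs per rack, consistent with NVIDIA's GB200 NVL72 rack design. The short Python sketch below also works out an idealized transfer time at the quoted NVLink-C2C rate; the 180 GB payload is an invented example.

```python
# Back-of-the-envelope arithmetic from the announced figures (illustrative only).

TOTAL_GPUS = 4608        # GPU count stated for the Kaohsiung center
TOTAL_RACKS = 64         # rack count stated for the center
NVLINK_C2C_GBPS = 900    # GB/s chip-to-chip bandwidth quoted for the GB200

gpus_per_rack = TOTAL_GPUS // TOTAL_RACKS
print(f"GPUs per rack: {gpus_per_rack}")  # 72, matching NVIDIA's GB200 NVL72 design

# Hypothetical example: ideal time to move a 180 GB model checkpoint over
# NVLink-C2C, ignoring protocol overhead and real-world contention.
checkpoint_gb = 180
print(f"Ideal transfer time: {checkpoint_gb / NVLINK_C2C_GBPS:.2f} s")  # 0.20 s
```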




NVIDIA and Foxconn plan for the facility

The partners said that NVIDIA’s AI technology will drive Foxconn’s three smart platforms: Smart Manufacturing, Smart EV, and Smart City. The new facility will use NVIDIA Omniverse to create digital twins for these platforms.

Foxconn plans to use image-recognition technology combined with its autonomous mobile robots (AMRs) to provide optimal capacity utilization in smart manufacturing. The companies said they will also take on production-line planning, which will encompass the existing manufacturing of AI servers and electric vehicle assembly plants. 

Foxconn subsidiary Foxtron’s Qiaotou automotive manufacturing facility will be one of Foxconn’s benchmark AI factories. Currently under construction, the site will use digital twins connected to cloud technologies. The company also hopes to enable collaboration between virtual and physical production lines.

In addition, the facility is set up with real-time digital monitoring to ensure high-quality manufacturing of electric buses.

NVIDIA and Foxconn plan to collaborate on future electric vehicle models designed by Foxconn. Currently, the company is negotiating projects with traditional European and American automakers. The partners also said they plan to develop a “cabin-driving-in-one” smart travel system. 

Foresight to collaborate with KONEC on autonomous vehicle concept

Foresight will integrate its ScaleCam 3D perception technology with KONEC into a concept autonomous vehicle.


Foresight says its ScaleCam system can generate high-quality depth maps. | Source: Foresight

Foresight Autonomous Holdings Ltd. last week announced that it has signed a co-development agreement with KONEC Co., a Korean Tier 1 automotive supplier. Under the agreement, the companies will integrate Foresight’s ScaleCam 3D perception technology into a concept autonomous vehicle. 

The collaboration is sponsored by the Foundation of Korea Automotive Parts Industry Promotion (KAP), founded by Hyundai Motor Group. The partners said they will combine KONEC’s expertise in developing advanced automotive systems with KAP’s mission to foster innovation within the automobile parts industry. 

“We believe that the collaboration with KONEC represents a significant step forward in the development of next-generation autonomous driving solutions,” stated Haim Siboni, CEO of Foresight. “By combining our resources, image-processing expertise, and innovative technologies, we aim to accelerate the development and deployment of autonomous vehicles, ultimately contributing to safer transportation solutions in the Republic of Korea.” 

Foresight is an innovator in automotive vision systems. The Ness Ziona, Israel-based company is developing smart multi-spectral vision software systems and cellular-based applications. Through its subsidiaries, Foresight Automotive Ltd., Foresight Changzhou Automotive Ltd., and Eye-Net Mobile Ltd., it develops both in-line-of-sight vision systems and beyond-line-of-sight accident-prevention systems. 

KONEC has established a batch production system for lightweight metal raw materials, models, castings, processing, and assembly through cooperation among its group affiliates. The Seosan-si, South Korea-based company's major customers include Tesla, Hyundai Motor, and Kia.

KONEC has also entered the field of camera-based information processing. For example, it developed a license-plate recognition system with companies that have commercialized systems on chips (SoCs) and modules for Internet of Things (IoT) communication.




Foresight ScaleCam to enhance autonomous capabilities 

The collaboration will incorporate Foresight’s ScaleCam 360º 3D perception technology, which the company said will enable the self-driving vehicle to accurately perceive its surroundings. Foresight and KONEC said the successful integration of ScaleCam could significantly enhance the capabilities and safety of autonomous vehicles.

ScaleCam is based on stereoscopic technology. The system uses advanced and proven image-processing algorithms, according to Foresight. The company claimed that it provides seamless vision by using two visible-light cameras for highly accurate and reliable obstacle-detection capabilities. 

Typical stereoscopic vision systems require constant calibration to ensure accurate distance measurements, Foresight noted. To solve this, some developers mount stereo cameras on a fixed beam, but this can limit camera placement positions and lead to technical issues, it said.

Foresight asserted that its technology allows for the independent placement of both visible-light and thermal infrared camera modules. This allows the system to support large baselines without mechanical constraints, providing greater distance accuracy at long ranges, it said. 
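Foresight has not published ScaleCam's algorithms, but the baseline trade-off it describes follows from standard stereo geometry: depth is Z = f·B/d for focal length f (in pixels), baseline B, and disparity d, so the depth error caused by a one-pixel disparity error grows roughly as Z²/(f·B). A minimal sketch with assumed camera parameters:

```python
# Standard stereo geometry, not Foresight's implementation: depth from disparity
# and the sensitivity of depth to a one-pixel disparity error.

def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Z = f * B / d for a rectified stereo pair."""
    return focal_px * baseline_m / disparity_px

def depth_error(focal_px: float, baseline_m: float, z_m: float, disp_err_px: float = 1.0) -> float:
    """Approximate depth error for a small disparity error: dZ ~ Z^2 * dd / (f * B)."""
    return (z_m ** 2) * disp_err_px / (focal_px * baseline_m)

f = 1400.0  # focal length in pixels (assumed)
for baseline in (0.12, 0.50, 1.20):  # fixed-beam mounting vs. independent placement
    err = depth_error(f, baseline, z_m=100.0)
    print(f"baseline {baseline:.2f} m -> ~{err:.1f} m depth error at 100 m")
```

With these assumed parameters, the error at 100 m drops from roughly 60 m at a 12 cm baseline to about 6 m at 1.2 m, which is why unconstrained camera placement matters at long detection ranges.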

Lumotive and Hokuyo release 3D lidar sensor with solid-state beam steering

The new sensor from Hokuyo uses Lumotive's beamforming technology for industrial automation and service robotics.


Hokuyo’s YLM-10LX 3D lidar sensor uses Lumotive’s patented LCM optical beamforming for robotics applications. Source: Lumotive

Perception technology continues to evolve for autonomous systems, becoming more robust and compact. Lumotive and Hokuyo Automatic Co. today announced the commercial release of the YLM-10LX 3D lidar sensor, which they claimed “represents a major leap forward in applying solid-state, programmable optics to transform 3D sensing.”

The product uses Lumotive’s Light Control Metasurface (LCM) optical beamforming technology and is designed for industrial automation and service robotics applications.

“We are thrilled to see our LM10 chip at the heart of Hokuyo’s new YLM-10LX sensor, the first of our customers’ products to begin deploying our revolutionary beam-steering technology into the market,” stated Dr. Axel Fuchs, vice president of business development at Lumotive.

“This product launch highlights the immense potential of our programmable optics in industrial robotics and beyond,” he added. “Together with Hokuyo, we look forward to continuing to redefine what’s possible in 3D sensing.”

Lumotive LCM offers stable lidar perception

Lumotive said its award-winning optical semiconductors enable advanced sensing in next-generation consumer, mobility, and industrial automation products such as mobile devices, autonomous vehicles, and robots. The Redmond, Wash.-based company said its patented LCM chips “deliver an unparalleled combination of high performance, exceptional reliability, and low cost — all in a tiny, easily integrated solution.”

The LCM technology uses dynamic metasurfaces to manipulate and direct light “in previously unachievable ways,” said Lumotive. This eliminates the need for the bulky, expensive, and fragile mechanical moving parts found in traditional lidar systems, it asserted.

“As a true solid-state beam-steering component for lidar, LCM chips enable unparalleled stability and accuracy in 3D object recognition and distance measurement,” said the company. “[The technology] effectively handles multi-path interference, which is crucial for industrial environments where consistent performance and safety are paramount.”

Lumotive said the LM10 LCM allows sensor makers such as Hokuyo to rapidly integrate compact, adaptive programmable optics into their products. It manufactures the LM10 like its other products, following well-established and scalable silicon fabrication techniques. The company said this cuts costs through economies of scale, making solid-state lidar economically feasible for widespread adoption in a broad spectrum of industries.




Software-defined sensing provides flexibility, says Hokuyo

Hokuyo claimed that the new sensor “is the first of its kind in the lidar industry, achieving superior range and field of view (FOV) compared to any other solid-state solution on the market by integrating beam-steering with Lumotive’s LM10 chip.”

In addition, the software-defined scanning capabilities of LCM beam steering allow users to adjust performance parameters such as the sensor’s resolution, detection range, and frame rate, said the Osaka, Japan-based company. They can program and use multiple FOVs simultaneously, adapting to application needs and changing conditions, indoors and outdoors.
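Neither company has published the sensor's SDK here, so the snippet below is purely hypothetical: the Sensor class, FieldOfView type, and method names are invented to sketch what programming multiple fields of view on a software-defined, solid-state scanner could look like.

```python
# Hypothetical sketch of software-defined scanning. The Sensor and FieldOfView
# types are invented for illustration and are NOT Hokuyo's or Lumotive's API.
from dataclasses import dataclass

@dataclass
class FieldOfView:
    h_deg: float            # horizontal extent
    v_deg: float            # vertical extent
    resolution: tuple       # (columns, rows) of range measurements
    frame_rate_hz: float

class Sensor:
    """Stand-in for a vendor SDK handle to an LCM-based solid-state lidar."""
    def __init__(self):
        self.fovs = []

    def add_fov(self, fov: FieldOfView):
        # A beam-steering sensor with no moving parts can time-multiplex
        # several scan regions instead of sweeping one fixed pattern.
        self.fovs.append(fov)

    def apply(self):
        for i, fov in enumerate(self.fovs):
            cols, rows = fov.resolution
            print(f"FOV {i}: {fov.h_deg}x{fov.v_deg} deg, "
                  f"{cols}x{rows} points @ {fov.frame_rate_hz} Hz")

sensor = Sensor()
sensor.add_fov(FieldOfView(120.0, 30.0, (320, 80), 10.0))   # wide situational scan
sensor.add_fov(FieldOfView(30.0, 10.0, (320, 120), 30.0))   # dense region of interest
sensor.apply()
```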

Hokuyo said the commercial release of the YLM-10LX sensor marks another milestone in its continued investment in its long-term, strategic collaboration with Lumotive.

“With the industrial sectors increasingly demanding high-performance, reliable lidar systems that also have the flexibility to address multiple applications, our continued partnership with Lumotive allows us to harness the incredible potential of LCM beam steering and to deliver innovative solutions that meet the evolving needs of our customers,” said Chiai Tabata, product and marketing lead at Hokuyo.

Founded in 1946, Hokuyo Automatic offers a range of industrial sensor products for the factory automation, logistics automation, and process automation industries. The company's products include collision-avoidance sensors, safety laser scanners, obstacle-detection sensors, optical data transmission devices, laser rangefinders (lidar), and hot-metal detectors. It also provides product distribution and support services.

March 2024 robotics investments total $642M

March 2024 robotics funding was buoyed by significant investment into software and drone suppliers.


Chinese and U.S. companies led March 2024 robotics investments. Credit: Eacon Mining, Dan Kara

Thirty-seven robotics firms received funding in March 2024, pulling in a total monthly investment of $642 million. March’s investment figure was significantly less than February’s mark of approximately $2 billion, but it was in keeping with other monthly investments in 2023 and early 2024 (see Figure 1, below).

Figure 1: March 2024 robotics investments dropped from the previous month.

California companies secure investment

As described in Table 1 below, the two largest robotics investments in March were secured by software suppliers. Applied Intuition, a provider of software infrastructure to deploy autonomous vehicles at scale, received a $250 million Series E round, while Physical Intelligence, a developer of foundation models and other software for robots and actuated devices, attracted $70 million in a seed round. Both firms are located in California.

Other California firms receiving substantial rounds included Bear Robotics, a manufacturer of self-driving indoor robots that raised a $60 million Series C round, and unmanned aerial system (UAS) developer Firestorm, whose seed funding was $20 million.

Table 1: March 2024 robotics investments

Company | Amount ($) | Round | Country | Technology
Agilis Robotics | 10,000,000 | Series A | China | Surgical / interventional systems
Aloft | Estimate | Other | U.S. | Drones, data acquisition / processing / management
Applied Intuition | 250,000,000 | Series E | U.S. | Software
Automated Architecture | 3,280,000 | Estimate | U.K. | Micro-factories
Bear Robotics | 60,000,000 | Series C | U.S. | Indoor mobile platforms
BIOBOT Surgical | 18,000,000 | Series B | Singapore | Surgical systems
Buzz Solutions | 5,000,000 | Other | U.S. | Drone inspection
Cambrian Robotics | 3,500,000 | Seed | U.K. | Machine vision
Coctrl | 13,891,783 | Series B | China | Software
DRONAMICS | 10,861,702 | Grant | U.K. | Drones
Eacon Mining | 41,804,272 | Series C | China | Autonomous transportation, sensors
ECEON Robotics | Estimate | Pre-seed | Germany | Autonomous forklifts
ESTAT Automation | Estimate | Grant | U.S. | Actuators / motors / servos
Fieldwork Robotics | 758,181 | Grant | U.K. | Outdoor mobile manipulation platforms, sensors
Firestorm Labs | 20,519,500 | Seed | U.S. | Drones
Freespace Robotics | Estimate | Other | U.S. | Automated storage and retrieval systems
Gather AI | 17,000,000 | Series A | U.S. | Drones, software
Glacier | 7,700,000 | Other | U.S. | Articulated robots, sensors
IVY TECH Ltd. | 421,435 | Grant | U.K. | Outdoor mobile platforms
KAIKAKU | Estimate | Pre-seed | U.K. | Collaborative robots
KEF Robotics | Estimate | Grant | U.S. | Drone software
Langyu Robot | Estimate | Other | China | Automated guided vehicles, software
Linkwiz | 2,679,725 | Other | Japan | Software
Motional | Estimate | Seed | U.S. | Autonomous transportation systems
Orchard Robotics | 3,800,000 | Pre-seed | U.S. | Crop management
Pattern Labs | 8,499,994 | Other | U.S. | Indoor and outdoor mobile platforms
Physical Intelligence | 70,000,000 | Seed | U.S. | Software
Piximo | Estimate | Grant | U.S. | Indoor mobile platforms
Preneu | 11,314,492 | Series B | S. Korea | Drones
QibiTech | 5,333,884 | Other | Japan | Software, operator services, uncrewed ground vehicles
Rapyuta Robotics | Estimate | Other | Japan | Indoor mobile platforms, autonomous forklifts
RIOS Intelligent Machines | 13,000,000 | Series B | U.S. | Machine vision
RITS | 13,901,825 | Series A | China | Sensors, software
Robovision | 42,000,000 | Other | Belgium | Computer vision, AI
Ruoyu Technology | 6,945,312 | Seed | China | Software
Sanctuary Cognitive Systems | Estimate | Other | Canada | Humanoids / bipeds, software
SeaTrac Systems | 899,955 | Other | U.S. | Uncrewed surface vessels
TechMagic | 16,726,008 | Series C | Japan | Articulated robots, sensors
Thor Power | Estimate | Seed | China | Articulated robots
Viam | 45,000,000 | Series B | Germany | Smart machines
WIRobotics | 9,659,374 | Series A | S. Korea | Exoskeletons, consumer, home healthcare
X Square | Estimate | Seed | U.S. | Software
Yindatong | Estimate | Seed | China | Surgical / interventional systems
Zhicheng Power | Estimate | Series A | China | Consumer / household
Zhongke Huiling | Estimate | Seed | China | Humanoids / bipeds, microcontrollers / microprocessors / SoC

Drones get fuel for takeoff in March 2024

Providers of drones, drone technologies, and drone services also attracted substantial individual investments in March 2024. Examples included Firestorm and Gather AI, a developer of inventory monitoring drones whose Series A was $17 million.

In addition, drone services provider Preneu obtained $11 million in Series B funding, and DRONAMICS, a developer of drone technology for cargo transportation and logistics operations, got a grant worth $10.8 million.

Companies in the U.S. and China received the majority of the March 2024 funding, at $451 million and $100 million, respectively (see Figure 2, below).

Companies based in Japan and the U.K. were also well represented among the March 2024 investment totals. Four companies in Japan secured a total of $34.7 million, while an equal number of firms in the U.K. attracted $13.5 million in funding.


Figure 2: March 2024 robotics investment by country.
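For readers who track these figures themselves, the country totals behind Figure 2 are a straightforward group-and-sum over the disclosed rounds in Table 1. A minimal sketch using only a handful of the table's rows (rows whose amounts are estimates carry no disclosed figure and are skipped here):

```python
# Group-and-sum of disclosed rounds by country, mirroring the method behind
# Figure 2. Only a few illustrative rows from Table 1 are included, so the
# totals below will not match the full $451M / $100M figures.
from collections import defaultdict

rounds = [
    ("Applied Intuition", 250_000_000, "U.S."),
    ("Physical Intelligence", 70_000_000, "U.S."),
    ("Bear Robotics", 60_000_000, "U.S."),
    ("Eacon Mining", 41_804_272, "China"),
    ("Coctrl", 13_891_783, "China"),
    ("TechMagic", 16_726_008, "Japan"),
    ("DRONAMICS", 10_861_702, "U.K."),
]

totals = defaultdict(int)
for _, amount, country in rounds:
    totals[country] += amount

for country, total in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(f"{country:6s} ${total:,}")
```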

Nearly 40% of March’s robotics investments came from a single Series E round — that of Applied Intuition. The remaining funding classes were all represented in March 2024 (Figure 3, below).

Figure 3: March 2024 robotics funding by type and amount.

Editor’s notes

What defines robotics investments? The answer to this simple question is central in any attempt to quantify them with some degree of rigor. To make investment analyses consistent, repeatable, and valuable, it is critical to wring out as much subjectivity as possible during the evaluation process. This begins with a definition of terms and a description of assumptions.

Investors and investing

Investment should come from venture capital firms, corporate investment groups, angel investors, and other sources. Friends-and-family investments, government/non-governmental agency grants, and crowd-sourced funding are excluded.

Robotics and intelligent systems companies

Robotics companies must generate or expect to generate revenue from the production of robotics products (that sense, analyze, and act in the physical world), hardware or software subsystems and enabling technologies for robots, or services supporting robotics devices. For this analysis, autonomous vehicles (including technologies that support autonomous driving) and drones are considered robots, while 3D printers, CNC systems, and various types of “hard” automation are not.

Companies that are “robotic” in name only, or use the term “robot” to describe products and services that do not enable or support devices acting in the physical world, are excluded. For example, this includes “software robots” and robotic process automation. Many firms have multiple locations in different countries. Company locations given in the analysis are based on the publicly listed headquarters in legal documents, press releases, etc.

Verification

Funding information is collected from several public and private sources. These include press releases from corporations and investment groups, corporate briefings, market research firms, and association and industry publications. In addition, information comes from sessions at conferences and seminars, as well as during private interviews with industry representatives, investors, and others. Unverifiable investments are excluded and estimates are made where investment amounts are not provided or are unclear.




BlackBerry and AMD partner to reduce latency in robotics

BlackBerry and Advanced Micro Devices said they plan to address the need for 'hard' real-time capabilities in robotics-focused hardware.


AMD’s Kria K26 SOM will power the hardware with the BlackBerry QNX SDP. | Source: AMD

BlackBerry Ltd. announced at Embedded World this week that it is collaborating with Advanced Micro Devices Inc. The partners said they want to enable next-generation robotics by reducing latency and jitter and delivering “repeatable determinism.”

The companies said they will jointly “address the critical need for ‘hard’ real-time capabilities in robotics-focused hardware.” BlackBerry and AMD plan to release an affordable system-on-module (SOM) platform that delivers enhanced performance, reliability, and scalability for robotic systems in industrial and healthcare applications.

This platform will combine BlackBerry’s QNX expertise in real-time foundational software and the QNX Software Development Platform (SDP) with heterogeneous hardware powered by the AMD Kria K26 SOM, which features both an Arm processor sub-system and FPGA (field-programmable gate array) logic.

“With the QNX Software Development Platform, customers can start development quickly on the AMD Kria KR260 Starter Kit and seamlessly scale to other higher-performance AMD platforms as their needs evolve,” stated Chetan Khona, senior director of industrial, vision, healthcare, and sciences markets at AMD.

“Combining the industry-leading strengths of AMD and QNX will provide a foundation platform that opens new doors for innovation and takes the future of robotics technology well beyond the constraints experienced until now,” he said.

BlackBerry, AMD provide capabilities with less latency

With Kria, an Arm sub-system can power the advanced capabilities of the QNX microkernel real-time operating system (RTOS), said Advanced Micro Devices and BlackBerry. It can do this while allowing users to run low-latency, deterministic functions on the programmable logic of the AMD Kria KR260 robotics starter kit.

This combination enables sensor fusion, high-performance data processing, real-time control, industrial networking, and reduced latency in robotics applications, said the companies.
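To make "latency and jitter" concrete: a hard real-time system is judged on the worst-case deviation of a periodic task from its deadline, not the average. The sketch below measures loop lateness on a general-purpose OS, where, unlike on a properly configured RTOS such as QNX, the worst case is effectively unbounded. It illustrates the problem the partners describe; it is not BlackBerry or AMD code.

```python
# Measure period jitter of a 1 kHz control loop on a general-purpose OS.
# A hard real-time system is judged on the WORST-case lateness; a desktop OS
# usually looks fine on average and badly on the tail.
import time

PERIOD_S = 0.001  # 1 ms target period
lateness = []
next_tick = time.perf_counter() + PERIOD_S
for _ in range(2000):
    while time.perf_counter() < next_tick:  # busy-wait until the next tick
        pass
    now = time.perf_counter()
    lateness.append(now - next_tick)        # how late this cycle started
    next_tick += PERIOD_S

avg_us = 1e6 * sum(lateness) / len(lateness)
worst_us = 1e6 * max(lateness)
print(f"average lateness: {avg_us:.1f} us, worst case: {worst_us:.1f} us")
```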

They added that customers can benefit from integration and optimization of software and hardware components. This results in streamlined development processes and accelerated time to market for robotics innovations, said AMD and BlackBerry. 

“An integrated solution by BlackBerry QNX through our collaboration with AMD will provide an integrated software-hardware foundation offering real-time performance, low latency, and determinism to ensure that critical robotic tasks are executed with the same level of precision and responsiveness every single time,” said Grant Courville, vice president of product and strategy at BlackBerry QNX.

“These are crucial attributes for industries carrying out finely tuned operations, such as the fast-growing industries of autonomous mobile robots and surgical robotics,” he added. “Together with AMD, we are committed to driving technological advancements that address some of these most complex challenges and transform the future of the robotics industry.”

The integrated system is now available to customers.

See AMD at Robotics Summit & Expo

For more than 50 years, Advanced Micro Devices has been a leading innovator in high-performance computing (HPC), graphics, and visualization technologies. The Santa Clara, Calif.-based company noted that billions of people, Fortune 500 businesses, and scientific research institutions worldwide rely on its technology daily.

AMD recently released the Embedded+ HPC architecture, the Spartan UltraScale+ FPGA family, and Versal Gen 2 for AI and edge processing.

Kosta Sidopoulos, a product engineer at AMD, will be speaking at the Robotics Summit & Expo, which takes place May 1 and 2 at the Boston Convention and Exhibition Center. His talk on “Enabling Next-Gen AI Robotics” will delve into the unique features and capabilities of AMD’s AI-enabled products. It will highlight their adaptability and scalability for diverse robotics applications.

Registration is now open for the Robotics Summit & Expo, which will feature more than 70 speakers, 200 exhibitors, and up to 5,000 attendees, as well as numerous networking opportunities.




AMD releases Versal Gen 2 to improve support for embedded AI, edge processing

The first devices in the AMD Versal Gen 2 series feature high-efficiency AI Engines, and Subaru is one of the first customers.


The AMD Versal AI Edge and Prime Gen 2 are next-gen SoCs. Source: Advanced Micro Devices

To enable more artificial intelligence on edge devices such as robots, hardware vendors are adding to their processor portfolios. Advanced Micro Devices Inc. today announced the expansion of its adaptive system on chip, or SoC, line with the new AMD Versal AI Edge Series Gen 2 and Versal Prime Series Gen 2.

“The demand for AI-enabled embedded applications is exploding and driving the need for solutions that bring together multiple compute engines on a single chip for the most efficient end-to-end acceleration within the power and area constraints of embedded systems,” stated Salil Raje, senior vice president and general manager of the Adaptive and Embedded Computing Group at AMD.

“Based on over 40 years of adaptive computing leadership in high-security, high-reliability, long-lifecycle, and safety-critical applications, these latest-generation Versal devices offer high compute efficiency and performance on a single architecture that scales from the low end to high end,” he added.

For more than 50 years, AMD said it has been a leading innovator in high-performance computing (HPC), graphics, and visualization technologies. The Santa Clara, Calif.-based company noted that billions of people, Fortune 500 businesses, and scientific research institutions worldwide rely on its technology daily.

Versal Gen 2 addresses three phases of accelerated AI

Advanced Micro Devices said the Gen 2 systems put preprocessing, AI inference, and postprocessing on a single device to deliver accelerated AI. This provides the optimal mix to meet the complex processing needs of real-world embedded systems, it asserted.

  • Preprocessing: The new systems include FPGA (field-programmable gate array) logic fabric for real-time preprocessing; flexible connections to a wide range of sensors; and implementation of high-throughput, low-latency data-processing pipelines.
  • AI inference: AMD said it provides an array of vector processors in the form of next-generation AI Engines for efficient inference.
  • Postprocessing: Arm CPU cores provide the power needed for complex decision-making and control for safety-critical applications, said AMD.

“This single-chip intelligence can eliminate the need to build multi-chip processing solutions, resulting in smaller, more efficient embedded AI systems with the potential for shorter time to market,” the company said.
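Structurally, the three phases map onto a staged pipeline. The toy sketch below illustrates that split in plain Python and is not AMD code; in AMD's mapping, the first stage would run in FPGA fabric, the second on the AI Engines, and the third on the Arm cores.

```python
# Generic three-stage pipeline mirroring the preprocess -> infer -> postprocess
# split described above. All three stages are toy stand-ins.
import random

def preprocess(raw):
    # Stage 1 (FPGA fabric in AMD's mapping): normalize raw sensor samples.
    lo, hi = min(raw), max(raw)
    return [(x - lo) / (hi - lo + 1e-9) for x in raw]

def infer(features):
    # Stage 2 (AI Engines): a fake "score" standing in for a neural network.
    return sum(features) / len(features)

def postprocess(score, threshold=0.5):
    # Stage 3 (Arm cores): decision logic and control output.
    return "actuate" if score > threshold else "hold"

frame = [random.random() for _ in range(64)]  # fake sensor frame
print(postprocess(infer(preprocess(frame))))
```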




AMD builds to maximize power and compute

AMD said its latest systems offer up to 10x more scalar compute compared with the first generation, so the devices can more efficiently handle sensor processing and complex scalar workloads. The Versal Prime Gen 2 devices include new hard IP for high-throughput video processing, including up to 8K multi-channel workflows.

This makes the scalable portfolio suitable for applications such as ultra-high-definition (UHD) video streaming and recording, industrial PCs, and flight computers, according to the company.

In addition, the new SoCs include new AI Engines that AMD claimed will deliver three times the TOPS (trillions of operations per second) per watt than the first-generation Versal AI Edge Series devices.

“Balancing performance, power, [and] area, together with advanced functional safety and security, Versal Series Gen 2 devices deliver new capabilities and features,” said AMD. It added that they “enable the design of high-performance, edge-optimized products for the automotive, aerospace and defense, industrial, vision, healthcare, broadcast, and pro AV [professional audio-video] markets.”

“Single-chip intelligence for embedded systems will enable pervasive AI, including robotics … smart city, cloud and AI, and the digital home,” said Manuel Uhm, director for Versal marketing at AMD, in a press briefing. “All will need to be accelerated.”


The Versal Prime Gen 2 is designed for high-throughput applications such as video processing. Source: AMD

Versal powers Subaru’s ADAS vision system

Subaru Corp. is using AMD’s adaptive SoC technology in current vehicles equipped with its EyeSight advanced driver-assistance system (ADAS). EyeSight is integrated into certain car models to enable advanced safety features including adaptive cruise control, lane-keep assist, and pre-collision braking.

“Subaru has selected Versal AI Edge Series Gen 2 to deliver the next generation of automotive AI performance and safety for future EyeSight-equipped vehicles,” said Satoshi Katahira. He is general manager of the Advanced Integration System Department and ADAS Development Department, Engineering Division, at Subaru.

“Versal AI Edge Gen 2 devices are designed to provide the AI inference performance, ultra-low latency, and functional safety capabilities required to put cutting-edge AI-based safety features in the hands of drivers,” he added.

Vivado and Vitis part of developer toolkits

AMD said its Vivado Design Suite tools and libraries can help boost productivity and streamline hardware design cycles, offering fast compile times and enhanced-quality results. The company said the Vitis Unified Software Platform “enables embedded software, signal processing, and AI design development at users’ preferred levels of abstraction, with no FPGA experience needed.”

Earlier this year, AMD released the Embedded+ architecture for accelerated edge AI, as well as the Spartan UltraScale+ FPGA family for edge processing.

Early-access documentation for Versal Series Gen 2 is now available, along with first-generation Versal evaluation kits and design tools. AMD said it expects Gen 2 silicon samples to be available in the first half of 2025, followed by evaluation kits and system-on-modules samples in mid-2025, and production silicon in late 2025.

Top 10 robotics news stories of March 2024

From events like MODEX and GTC to new product launches, there was no shortage of robotics news to cover in March 2024.

March 2024 was a non-stop month for the robotics industry. From events such as MODEX and GTC to exciting new deployments and product launches, there was no shortage of news to cover. 

Here are the top 10 most popular stories on The Robot Report this past month. Subscribe to The Robot Report Newsletter or listen to The Robot Report Podcast to stay updated on the latest technology developments.


10. Robotics Engineering Career Fair to connect candidates, employers at Robotics Summit

The career fair will draw from the general robotics and artificial intelligence community, as well as from attendees at the Robotics Summit & Expo. Past co-located career fairs have drawn more than 800 candidates, and MassRobotics said it expects even more people at the Boston Convention and Exhibition Center this year. Read More



9. SMC adds grippers for cobots from Universal Robots

SMC recently introduced a series of electric grippers designed to be used with collaborative robot arms from Universal Robots. Available in basic and longitudinal types, SMC said the LEHR series can be adapted to different industrial environments like narrow spaces. Read More


8. Anyware Robotics announces new add-on for Pixmo unloading robots

Anyware Robotics announced in March 2024 an add-on for its Pixmo robot for truck and container unloading. The patent-pending accessory includes a vertical lift with a conveyor belt that is attached to Pixmo between the robot and the boxes to be unloaded. Read More



7. Accenture invests in humanoid maker Sanctuary AI in March 2024

In its Technology Vision 2024 report, Accenture said 95% of the executives it surveyed agreed that “making technology more human will massively expand the opportunities of every industry.” Well, Accenture put its money where its mouth is. Accenture Ventures announced a strategic investment in Sanctuary AI, one of the companies developing humanoid robots. Read More



6. Cambrian Robotics obtains seed funding to provide vision for complex tasks

Machine vision startup Cambrian Robotics Ltd. has raised $3.5 million in seed+ funding. The company said it plans to use the investment to continue developing its AI platform to enable robot arms “to surpass human capabilities in complex vision-based tasks across a variety of industries.” Read More


5. Mobile Industrial Robots launches MiR1200 autonomous pallet jack

Autonomous mobile robots (AMRs) are among the systems benefitting from the latest advances in AI. Mobile Industrial Robots at LogiMAT in March 2024 launched the MiR1200 Pallet Jack, which it said uses 3D vision and AI to identify pallets for pickup and delivery “with unprecedented precision.” Read More


4. Reshape Automation aims to reduce barriers of robotics adoption

Companies in North America bought 31,159 robots in 2023. That’s a 30% decrease from 2022. And that’s not sitting well with robotics industry veteran Juan Aparicio. After working at Siemens for a decade and stops at Ready Robotics and Rapid Robotics, Aparicio hopes his new startup Reshape Automation can chip away at this problem. Read More



3. Mercedes-Benz testing Apollo humanoid

Apptronik announced that leading automotive brand Mercedes-Benz is testing its Apollo humanoid robot. As part of the agreement, Apptronik and Mercedes-Benz will collaborate on identifying applications for Apollo in automotive settings. Read More



2. NVIDIA announces new robotics products at GTC 2024

The NVIDIA GTC 2024 keynote kicked off like a rock concert in San Jose, Calif. More than 15,000 attendees filled the SAP Center in anticipation of CEO Jensen Huang’s annual presentation of the latest product news from NVIDIA. He discussed the new Blackwell platform, improvements in simulation and AI, and all the humanoid robot developers using the company’s technology. Read More



1. Schneider Electric unveils new Lexium cobots at MODEX 2024

In Atlanta, Schneider Electric announced the release of two new collaborative robots: the Lexium RL 3 and RL 12, as well as the Lexium RL 18 model coming later this year. From single-axis machines to high-performance, multi-axis cobots, the Lexium line enables high-speed motion and control of up to 130 axes from one processor, said the company. It added that this enables precise positioning to help solve manufacturer production, flexibility, and sustainability challenges. Read More


Delta Electronics demonstrates digital twin, power systems at GTC

Delta Electronics has developed digital twins with NVIDIA for designing and managing industrial automation and AI data centers.


Delta exhibited its data center and other technologies at NVIDIA GTC 2024. Source: Delta Electronics

SAN JOSE, Calif. — Artificial intelligence and robotics both devour power, but simulation, next-generation processors, and good product design can mitigate the draw. At NVIDIA Corp.’s GTC event last week, Delta Electronics Inc. demonstrated how its digital twin platform, developed on NVIDIA Omniverse, can help enhance smart manufacturing capabilities.

“We’ve partnered with NVIDIA on energy-efficient designs to support AI,” Franziskus Gehle, general manager of the Power Solutions business unit at Delta, told The Robot Report. “We’ve co-developed 5.5 kW designs for 98% efficiency.”

The Taipei, Taiwan-based company explained how its technologies can benefit industrial automation and warehouse operations. Delta also showed its ORV3 AI server infrastructure product and DC converters and other technologies designed to support graphics processing unit (GPU) operations.




Delta designs simulation to manage automation

Founded in 1971, Delta Electronics said it is a global leader in switching power supplies and thermal management products. The company’s portfolio includes systems for industrial automation, building automation, telecommunications power, data center infrastructure, electric vehicle charging, renewable energy, and energy storage and display.

Delta added that its energy-efficient products can support sustainable development. The company has sales offices, research and development centers, and factories at nearly 200 locations around the world. It provides articulated robot arms, SCARA robots, and robot controllers with integrated servo drives.

“Since 1995, Delta has supplied automation components, and it now offers a full product line,” said Claire Ou, senior principal for strategic marketing in the Power and System business group at Delta. “We’ve used NVIDIA simulation for our customers and ourselves, for machine tools and semiconductors.”

“Because Delta has a lot of factories around the world, it’s best to do test runs to fine-tune our hardware and software before implementation,” she told The Robot Report. “Our solutions can monitor and manage warehouses and factories for maximum productivity.”

Delta has also developed its own standalone simulation software alongside NVIDIA Omniverse, and it can integrate data from both. In the past, automation designers, manufacturers, and users worked with different tools, but customers are now optimistic about easier collaboration, said Ou.

“In 2012, Industry 4.0 was about digitalizing manufacturing,” she noted. “Since then, our management and monitoring systems have been integrated into global factories. We’re also working with data for construction and smart buildings.”

NVIDIA partners for digital twins to manage power

“We are honored to be the only power and thermal management solutions provider at NVIDIA GTC 2024, where we will showcase the NVIDIA Omniverse-powered digital twin we have developed, which underscores our superior expertise in next-generation electronics manufacturing,” stated Mark Ko, vice chairman of Delta Electronics. “We look forward to helping transcend the boundaries of energy efficiency in the AI realm using the latest technologies.”

Delta has deployed its power management technology to leading cloud solution providers (CSPs) and AI developers such as Meta (parent of Facebook), Microsoft, and Amazon Web Services, noted Gehle.

“Our customers have doubled their power requirements in the past six months rather than in years,” he said. “All of their road maps anticipate a significant increase in power demand, so they need management in place for next-generation GPUs and power-hungry generative AI.”

“We used digital twins and Omniverse to design and pre-qualify our products worldwide,” Gehle explained. “It’s important that our data center plans are aligned with those of our customers.”

At GTC, Delta presented an integrated Open Rack Version 3 (ORV3) system for AI server infrastructure with server power supplies boasting energy efficiency as high as 97.5%. It also included SD-WAN, Common Redundant Power Supply Units (CRPS) with 54Vdc output, ORV3 18kW/33kW HPR Power Shelves, a Battery Backup Unit (BBU), a Mini UPS, and a liquid cooling system.

In addition, the company showed its portfolio of DC/DC converters, power chokes, and 3D Vapor Chambers for GPU operations.

“The new era of AI-powered manufacturing is marked by digital twins and synthetic data, which can enhance efficiency and productivity before actual production begins,” said Rev Lebaredian, vice president of Omniverse and simulation technology at NVIDIA, in a release.

“By developing its digital platform on NVIDIA Omniverse, Delta can virtually link specific production lines and aggregate data from a diverse range of equipment and systems to create a digital twin of its operations,” he said. “And with NVIDIA Isaac Sim, it can generate synthetic data to train its computer models to achieve 90% accuracy.”

NVIDIA announces new robotics products at GTC 2024

NVIDIA CEO Jensen Huang wowed the crowd in San Jose with the company's latest processor, AI, and simulation product announcements.


NVIDIA CEO Jensen Huang ended his GTC 2024 keynote backed by life-size images of the various humanoids in development and powered by the Jetson Orin computer. | Credit: Eugene Demaitre

SAN JOSE, Calif. — The NVIDIA GTC 2024 keynote kicked off like a rock concert yesterday at the SAP Center. More than 15,000 attendees filled the arena in anticipation of CEO Jensen Huang’s annual presentation of the latest product news from NVIDIA.

To build the excitement, the waiting crowd was mesmerized by an interactive, real-time generative art display running live on the main stage screen, driven by prompts from the studio of artist Refik Anadol.

New foundation for humanoid robotics

The big news from the robotics side of the house is that NVIDIA launched a new general-purpose foundation model for humanoid robots called Project GR00T. This new model is designed to bring robotics and embodied AI together while enabling the robots to understand natural language and emulate movements by observing human actions.


Project GR00T training model. | Credit: NVIDIA

GR00T stands for “Generalist Robot 00 Technology,” and with the race for humanoid robotics heating up, this new technology is intended to help accelerate development. GR00T is a large multimodal model (LMM) providing robotics developers with a generative AI platform to begin the implementation of large language models (LLMs).

“Building foundation models for general humanoid robots is one of the most exciting problems to solve in AI today,” said Huang. “The enabling technologies are coming together for leading roboticists around the world to take giant leaps towards artificial general robotics.”

GR00T uses the new Jetson Thor

As part of its robotics announcements, NVIDIA unveiled Jetson Thor for humanoid robots, based on the NVIDIA Thor system-on-a-chip (SoC). Significant upgrades to the NVIDIA Isaac robotics platform include generative AI foundation models and tools for simulation and AI workflow infrastructure.

The Thor SoC includes a next-generation GPU based on NVIDIA Blackwell architecture with a transformer engine delivering 800 teraflops of 8-bit floating-point AI performance. With an integrated functional safety processor, a high-performance CPU cluster, and 100GB of Ethernet bandwidth, it can simplify design and integration efforts, claimed the company.


Project GR00T, a general-purpose multimodal foundation model for humanoids, enables robots to learn different skills. | Credit: NVIDIA

NVIDIA showed humanoids in development with its technologies from companies including 1X Technologies, Agility Robotics, Apptronik, Boston Dynamics, Figure AI, Fourier Intelligence, Sanctuary AI, Unitree Robotics, and XPENG Robotics.

“We are at an inflection point in history, with human-centric robots like Digit poised to change labor forever,” said Jonathan Hurst, co-founder and chief robot officer at Agility Robotics. “Modern AI will accelerate development, paving the way for robots like Digit to help people in all aspects of daily life.”

“We’re excited to partner with NVIDIA to invest in the computing, simulation tools, machine learning environments, and other necessary infrastructure to enable the dream of robots being a part of daily life,” he said.

NVIDIA updates Isaac simulation platform

The Isaac tools that GR00T uses are capable of creating new foundation models for any robot embodiment in any environment, according to NVIDIA. Among these tools are Isaac Lab for reinforcement learning, and OSMO, a compute orchestration service.

Embodied AI models require massive amounts of real and synthetic data. The new Isaac Lab is a GPU-accelerated, lightweight, performance-optimized application built on Isaac Sim for running thousands of parallel simulations for robot learning.
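Isaac Lab's internals aren't reproduced here, but the core pattern behind GPU-parallel robot learning is stepping thousands of simulated environments as one batched array operation. A minimal NumPy sketch of that pattern, with a toy one-dimensional "robot" per environment (NumPy stands in for the GPU tensors a framework like Isaac Lab would use):

```python
# Batched environment stepping: the pattern behind running thousands of
# parallel simulations for reinforcement learning.
import numpy as np

NUM_ENVS = 4096
state = np.zeros((NUM_ENVS, 2))                    # [position, velocity] per env
actions = np.random.uniform(-1, 1, (NUM_ENVS, 1))  # one policy action per env

def step(state, actions, dt=0.02):
    pos, vel = state[:, :1], state[:, 1:]
    vel = vel + actions * dt                       # integrate acceleration
    pos = pos + vel * dt                           # integrate velocity
    reward = -np.abs(pos).squeeze(-1)              # reward: stay near the origin
    return np.concatenate([pos, vel], axis=1), reward

state, reward = step(state, actions)               # one step across all 4,096 envs
print(state.shape, reward.shape)                   # (4096, 2) (4096,)
```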

NVIDIA software — Omniverse, Metropolis, Isaac, and cuOpt — combine to create an “AI gym” where robots and AI agents can work out and be evaluated in complex industrial spaces. | Credit: NVIDIA

To scale robot development workloads across heterogeneous compute, OSMO coordinates the data generation, model training, and software/hardware-in-the-loop workflows across distributed environments.

NVIDIA also announced Isaac Manipulator and Isaac Perceptor — a collection of robotics-pretrained models, libraries and reference hardware.

Isaac Manipulator offers dexterity and modular AI capabilities for robotic arms, with a robust collection of foundation models and GPU-accelerated libraries. It can accelerate path planning by up to 80x, and zero-shot perception increases efficiency and throughput, enabling developers to automate a greater number of new robotic tasks, said NVIDIA.

Among early ecosystem partners are Franka Robotics, PickNik Robotics, READY Robotics, Solomon, Universal Robots, a Teradyne company, and Yaskawa.

Isaac Perceptor provides multi-camera, 3D surround-vision capabilities, which are increasingly being used in autonomous mobile robots (AMRs) adopted in manufacturing and fulfillment operations to improve efficiency and worker safety. NVIDIA listed companies such as ArcBest, BYD, and KION Group as partners.




‘Simulation first’ is the new mantra for NVIDIA

A simulation-first approach is ushering in the next phase of automation. Real-time AI is now a reality in manufacturing, factory logistics, and robotics. These environments are complex, often involving hundreds or thousands of moving parts. Until now, it was a monumental task to simulate all of these moving parts.

NVIDIA has combined software such as Omniverse, Metropolis, Isaac, and cuOpt to create an “AI gym” where robots and AI agents can work out and be evaluated in complex industrial spaces.

Huang demonstrated a digital twin of a 100,000-sq.-ft. warehouse — built using the NVIDIA Omniverse platform for developing and connecting OpenUSD applications — operating as a simulation environment for dozens of digital workers and multiple AMRs, vision AI agents, and sensors.

Each mobile robot, running the NVIDIA Isaac Perceptor multi-sensor stack, can process visual information from six sensors, all simulated in the digital twin.

An AMR and a manipulator working together to enable AI-based automation in a warehouse powered by NVIDIA Isaac. | Credit: NVIDIA

At the same time, the NVIDIA Metropolis platform for vision AI can create a single centralized map of worker activity across the entire warehouse, fusing data from 100 simulated ceiling-mounted camera streams with multi-camera tracking. This centralized occupancy map can help inform optimal AMR routes calculated by the NVIDIA cuOpt engine for solving complex routing problems.

cuOpt, an optimization AI microservice, solves complex routing problems with multiple constraints using GPU-accelerated evolutionary algorithms.
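cuOpt's actual API is not shown here. As a toy illustration of the kind of constrained assignment problem it tackles, the sketch below greedily assigns pick tasks to AMRs by distance under a per-robot capacity limit; a real solver searches far larger constraint spaces with GPU-accelerated heuristics rather than a one-pass greedy rule.

```python
# Toy task-to-AMR assignment with one constraint (per-robot capacity).
# This greedy heuristic only hints at the routing problems cuOpt solves.
import math

robots = {"amr1": (0.0, 0.0), "amr2": (50.0, 10.0)}   # current positions (m)
capacity = {"amr1": 2, "amr2": 2}                     # max tasks per robot
tasks = [(5.0, 2.0), (48.0, 12.0), (10.0, 1.0), (60.0, 8.0)]

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

assignments = {name: [] for name in robots}
for task in tasks:
    # Choose the closest robot that still has spare capacity.
    candidates = [n for n in robots if len(assignments[n]) < capacity[n]]
    best = min(candidates, key=lambda n: dist(robots[n], task))
    assignments[best].append(task)
    robots[best] = task   # the robot's next leg starts from this task location

print(assignments)
```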

All of this happens in real-time, while Isaac Mission Control coordinates the entire fleet using map data and route graphs from cuOpt to send and execute AMR commands.

NVIDIA DRIVE Thor for robotaxis

The company also announced NVIDIA DRIVE Thor, which now supersedes NVIDIA DRIVE Orin as an SoC for autonomous driving applications.

Multiple autonomous vehicles are using NVIDIA architectures, including robotaxis and autonomous delivery vehicles from companies including Nuro, XPENG, WeRide, Plus, and BYD.

AMD unveils Spartan UltraScale+ FPGA family for edge processing

AMD said the latest addition to its portfolio of FPGAs and adaptive SoCs delivers cost- and power-efficient performance.


The Spartan UltraScale+ FPGA is designed to provide cost and energy-efficient compute. | Source: AMD

As robots and sensors proliferate, the need for robust compute has increased. Advanced Micro Devices Inc. yesterday announced its AMD Spartan UltraScale+ FPGA family. The company said the latest addition to its portfolio of field-programmable gate arrays, or FPGAs, and adaptive systems on chips, or SoCs, delivers cost and power-efficient performance for a wide range of I/O-intensive applications at the edge.

“For over 25 years, the Spartan FPGA family has helped power some of humanity’s finest achievements, from lifesaving automated defibrillators to the CERN particle accelerator advancing the boundaries of human knowledge,” stated Kirk Saban, corporate vice president of the Adaptive and Embedded Computing Group at AMD.

“Building on proven 16-nm technology, the Spartan UltraScale+ family’s enhanced security and features, common design tools, and long product lifecycles further strengthen our market-leading FPGA portfolio and underscore our commitment to delivering cost-optimized products for customers,” he added.

AMD claimed that its Spartan UltraScale+ devices offer the highest I/O-to-logic-cell ratio among FPGAs built on 28 nm and lower process technology. The Santa Clara, Calif.-based company said they consume as much as 30% less total power than its previous generation. The FPGAs also include the most robust set of security features in the cost-optimized portfolio, it asserted.

AMD optimizes Spartan UltraScale+ for the edge

The high I/O counts and flexible interfaces of the new Spartan UltraScale+ FPGAs enable them to efficiently interface with multiple devices or systems, said AMD. The company said this will help address “the explosion of sensors and connected devices” such as robots. 

“Spartan UltraScale+ is primarily targeted for robot actuators, joint control, and camera sensors,” Rob Bauer, senior manager of cost-optimized silicon marketing at AMD, told The Robot Report. “IoT [Internet of Things] devices are growing 2.3X from 2022 to 2028, according to the FPGA Market Global Forecast. There’s a need for supply chain stability and longevity.”

“The high programmable I/O count enables interfacing with a very wide range of sensors, and that in combination with programmable logic allows sensor processing and control in a low-latency, deterministic, and real-time manner,” he explained. “Programmable I/O is made up of a combination of 3.3V HDIO, HPIO, and the new high-performance XP5IO capable of supporting 3.2G MIPI D-PHY.”

The FPGAs offer up to 572 I/Os with voltage support up to 3.3V, enabling any-to-any connectivity for edge sensing and control applications.

AMD said its devices feature the “proven” 16 nm fabric and support a wide array of packaging options, starting as small as 10x10 mm. These provide high I/O density in a compact footprint.

In addition, the company said its portfolio provides the scalability to start with cost-optimized FPGAs and continue through to midrange and high-end products. It estimated that the Spartan UltraScale+ reduces power consumption by 30% in comparison with its 28 nm Artix 7 family by using 16 nm FinFET technology and hardened connectivity. 

“Generational power improvement is up to 30%. This is already significant, as there could be multiple such devices used in a robot today that can be upgraded with lower-power, newer-generation devices,” Bauer said. “Additionally, these devices are expected to act as the nervous system of the robot, interfacing and moving data between the sensors and the controller, which can now be done at an overall power efficiency improvement of up to 60%.”
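
To put those relative claims in concrete terms, here is a rough back-of-the-envelope sketch; the absolute wattages are hypothetical placeholders, since AMD quotes only percentages.

```python
# Back-of-the-envelope view of AMD's relative power claims: up to 30%
# lower device power vs. the 28 nm Artix 7 generation, and up to 60%
# better efficiency on the sensor-to-controller data path. Absolute
# wattages here are hypothetical placeholders, not AMD figures.

artix7_device_w = 3.0                        # assumed 28 nm device power
spartan_w = artix7_device_w * (1 - 0.30)     # 30% generational saving

devices_per_robot = 4                        # assumed FPGAs in one robot
print(f"per device: {artix7_device_w:.2f} W -> {spartan_w:.2f} W")
print(f"per robot:  {artix7_device_w * devices_per_robot:.1f} W -> "
      f"{spartan_w * devices_per_robot:.1f} W")

# The 60% figure applies to the sensor-to-controller data path:
path_w = 1.0                                 # assumed data-path power
print(f"data path:  {path_w:.2f} W -> {path_w * (1 - 0.60):.2f} W")
```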

These devices are the first AMD UltraScale+ FPGAs with a hardened LPDDR5 memory controller and PCIe Gen4 x8 support, providing both power efficiency and future-ready capabilities for customers, said AMD. 




Spartan UltraScale+ includes several security features

AMD said its new devices’ security features include:

  • IP protection: Support for post-quantum cryptography (PQC) with NIST-approved algorithms offers state-of-the-art IP protection against evolving cyberattacks and threats. A physical unclonable function provides each device with a unique fingerprint for added security.
  • Tampering prevention: PPK/SPK key support helps manage obsolete or compromised security keys, while differential power analysis helps protect against side-channel attacks. The devices contain a permanent tamper penalty to further protect against misuse.
  • Uptime maximization: Enhanced single-event upset performance enables fast and secure configuration with increased reliability for customers, said AMD.

“We have many features in addition to PQC to enable secure authentication in post-quantum age,” Bauer said. “Spartan UltraScale+ devices are able to meet many of the requirements listed in IEC 62443, as it offers a long list of security features such as PUF, hardware root of trust, true random-number generator, AES-GCM-256, eFUSE, soft error mitigation, security monitor, DPA counter measures, temperature and voltage monitoring, tamper logging, JTAG monitoring, and more.”
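
For readers unfamiliar with the primitives Bauer lists, the short sketch below demonstrates AES-GCM-256 authenticated encryption in software with the widely used Python cryptography package. It illustrates only the algorithm class; Spartan UltraScale+ implements this in hardened silicon, and the code is not AMD's.

```python
# Software illustration of AES-GCM-256 authenticated encryption, the
# same algorithm class AMD hardens on-chip. Demo only; not AMD code.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)  # 256-bit key (AES-GCM-256)
nonce = os.urandom(12)                     # 96-bit nonce, the GCM norm
bitstream = b"example FPGA configuration payload"
aad = b"device-id:0x1234"                  # authenticated but unencrypted

aesgcm = AESGCM(key)
ciphertext = aesgcm.encrypt(nonce, bitstream, aad)  # encrypt + auth tag
assert aesgcm.decrypt(nonce, ciphertext, aad) == bitstream
# Any tampering with the ciphertext or AAD raises InvalidTag on decrypt,
# which is what lets a device reject a modified configuration image.
```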

Robotics and generative AI are contributing to chipset demand, according to Omdia, which estimated that the global market for dedicated SoCs could reach $866 million by 2028.

AMD said its entire portfolio of FPGAs and adaptive SoCs is supported by the AMD Vivado Design Suite and Vitis Unified Software Platform. This allows hardware and software designers to use “a single design cockpit from design to verification” to maximize the productivity benefits of these tools, it said.

The Spartan UltraScale+ FPGA sampling and evaluation kits will be available in the first half of 2025, according to AMD. Documentation is available now, and tool support will begin with the AMD Vivado Design Suite in the fourth quarter of 2024.

Locus Lock promises to protect autonomous systems from GPS spoofing https://www.therobotreport.com/locus-lock-promises-protect-autonomous-systems-gps-spoofing/ https://www.therobotreport.com/locus-lock-promises-protect-autonomous-systems-gps-spoofing/#respond Mon, 26 Feb 2024 15:21:58 +0000 https://www.therobotreport.com/?p=577991 Locus Lock has developed software-defined radio to overcome GPS spoofing for more secure autonomous navigation.

Locus Lock is designing RF systems to provide navigational security. Source: Locus Lock

Flying back from Miami last week, I put my life in the hands of two strangers, just because they wore gold epaulets. These commercial pilots, in turn, trusted their onboard computers to safely navigate the passengers home. The computers accessed satellite data from the Global Positioning System to set the course.

This chain of command is very fragile. The International Air Transport Association (IATA) reported last month an increased level of GPS spoofing and signal jamming since the outbreak of the wars in Ukraine and Israel. This poses the threat of catastrophe to aviators everywhere.

For example, last September, OPS Group reported that a European flight en route to Dubai almost entered into Iranian airspace without clearance. In 2020, Iran shot down an uncleared passenger aircraft that entered its territory. This has made the major airlines, avionics manufacturers, and NATO militaries and governments scramble to find solutions.

Navigational errors can be very dangerous for commercial aircraft. Source: OPS Group

Locus Lock founder came out of drone R&D

At ff Venture Capital, we recognize that GPS spoofing and jamming are fundamental obstacles to moving aerial, terrestrial, and marine autonomous systems forward. This investment thesis is grounded in a simple belief: the deployment of cost-effective uncrewed systems requires the trust of human operators who can’t afford to question the data.

When machines go awry, so does the industry. Just ask Cruise! This conviction led us to invest in Locus Lock. The company said it is taking an innovative software-defined approach to GNSS signal processing at a fraction of the cost of comparable radio-frequency hardware sold by military contractors.

Last week, I sat down with Locus Lock founder Hailey Nichols, a former University of Texas researcher in the school’s Radionavigation Laboratory. UT’s Lab is best known for its work with SpaceX and Starlink.

Nichols explained her transition from academic to founder: “I was always enthralled with the idea of aerospace and studied at MIT, where I was obsessed with the control and robotic side of aerospace. After I graduated, I worked at Aurora Flight Sciences, which is a subsidiary of Boeing, and I was a UAV software engineer.”

At Aurora, Nichols focused on integrating suites of sensors such as lidar, GPS, radar, and computer vision for uncrewed aerial vehicles (UAVs). However, she quickly became frustrated with the costs and quality of the sensors.

“They were physically heavy [and] power-intensive, and it made it quite hard for engineers to integrate,” she recalled. “This problem frustrated me so much that I went back to grad school to study it further, and I joined a lab down at the University of Texas.”

In Austin, the roboticist saw a different approach to sensor data, using software for signal processing.

“The radio navigation lab was highly specialized in signal processing, specifically bringing advanced software algorithms and robust estimation techniques to sensor technology,” explained Nichols. “This enabled more precise, secure, and reliable data, like positioning, navigation, and timing.”

Her epiphany came when she saw the market demand for the lab’s GNSS receiver from the U.S. Department of Defense and commercial partners after Locus Lock published research on autonomous vehicles accurately navigating urban canyons.

Navigating urban canyons is a challenge for conventional satellite-based systems. Source: Quora

Reliable navigation needed for dual-use applications

Today, Locus Lock is ready to market its product more widely for dual-use applications across the spectrum of autonomy for commercial and defense use cases.

“Current GPS receivers often fail in what’s called ‘urban multipath,'” said Nichols. “This is where building interference and shrouding of the sky can cause positioning errors. This can be problematic for autonomous cars, drones, and outdoor robots that need access to centimeter-level positioning to make safe and informed decisions about where they are on the road or in the sky.”

The RF engineer continued: “Our other applicable industry is defense tech. With the rise of the Ukraine conflict and the Israel conflict in the Middle East, we’ve seen a massive amount of deliberate interference from bad actors that are either spoofing or jamming, causing major outages or disruptions in GPS positioning.”

Locus Lock addresses this problem by delivering its GPS processing suite as software, which, unlike hardware, is affordable and extremely flexible.

“The ability to be backward-compatible and future-proof where we can constantly update and evolve our GPS processing suite to evolving attack vectors ensures that our customers are given the most cutting-edge and up-to-date processing techniques to enable centimeter-level positioning globally,” added Nichols.

“So our GNSS receivers are software-defined radio [SDR] with a specialized variant of inertially aided RTK [real-time kinematics],” she said, claiming that it provides a differentiator from competing products. “What that means is we’re doing some advanced sensor-fusion techniques with GNSS signals in addition to inertial navigation to ensure that, even in these pockets of urban canyons where you may not have access to GNSS signals … the GPS receiver [will] still provide centimeter-level positioning.”
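
As a rough illustration of what “inertially aided” means, here is a toy one-dimensional filter that dead-reckons from IMU accelerations at a high rate and nudges the estimate toward GNSS fixes when they arrive. Locus Lock's estimator is proprietary and far more sophisticated; every number and name below is illustrative only.

```python
# Toy 1-D inertially aided positioning: integrate IMU acceleration at
# high rate; correct with GNSS fixes when available. Illustrative only,
# not Locus Lock's algorithm.

def fuse(imu_accel, gnss_fixes, dt=0.01, gain=0.2):
    """imu_accel: one sample per tick; gnss_fixes: position or None."""
    pos = vel = 0.0
    for accel, fix in zip(imu_accel, gnss_fixes):
        vel += accel * dt              # dead-reckon velocity from IMU
        pos += vel * dt                # dead-reckon position
        if fix is not None:            # GNSS may drop out in urban
            pos += gain * (fix - pos)  # canyons; nudge toward the fix
    return pos

# 1 s at 100 Hz: constant 1 m/s^2 acceleration, a GNSS fix every 10th
# tick. Ground truth after 1 s is 0.5 m.
accel = [1.0] * 100
fixes = [0.5 * (i * 0.01) ** 2 if i % 10 == 0 else None for i in range(100)]
print(f"fused estimate: {fuse(accel, fixes):.3f} m (truth 0.500 m)")
```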

As Nichols boasted, Locus Lock is an enabler of “next generation autonomous mobility.”

Locus Lock looks to affordable centimeter-level accuracy

While traditional GPS components cost around $40,000, Locus Lock said its proprietary software and a 2-in. board cost around $2,000. Today, centimeter accuracy is inaccessible to most robot companies because most suppliers of robust hardware are military contractors, including L3Harris Technologies, BAE Systems, Northrop Grumman, and Elbit Systems.

“We’ve specifically made sure to cater our solution towards more low-cost environments that can proliferate mass-market autonomy and robotics into the ecosystem,” stated Nichols.

Locus Lock puts its software on a 2-in. board. Source: Oliver Mitchell

Nichols added that Locus Lock’s GNSS receiver is able to pull in data from global and regional satellite constellations.

“[This gives] us more access to any signals in the sky at any given time,” said the startup founder. “Diversity is also increasingly important in next-generation GPS receivers because it allows the device to evade jammed or afflicted channels.”
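
A minimal sketch of what that diversity buys in practice: given per-channel carrier-to-noise readings, a receiver can drop channels whose signal quality suggests jamming and still keep enough healthy satellites for a fix. The threshold and data below are invented for illustration, not Locus Lock's implementation.

```python
# Illustrative channel triage across constellations: discard channels
# with anomalously low carrier-to-noise density (C/N0), a common jamming
# symptom. Threshold and readings are invented for illustration.

channels = [
    {"constellation": "GPS",     "prn": 7,  "cn0_dbhz": 45.1},
    {"constellation": "GPS",     "prn": 12, "cn0_dbhz": 12.3},  # suspect
    {"constellation": "Galileo", "prn": 3,  "cn0_dbhz": 47.8},
    {"constellation": "BeiDou",  "prn": 21, "cn0_dbhz": 44.0},
    {"constellation": "GLONASS", "prn": 5,  "cn0_dbhz": 9.9},   # suspect
]

HEALTHY_CN0_DBHZ = 30.0  # assumed floor below which a channel is suspect
healthy = [c for c in channels if c["cn0_dbhz"] >= HEALTHY_CN0_DBHZ]

print(f"{len(healthy)} of {len(channels)} channels usable")
for c in healthy:
    print(f"  {c['constellation']} PRN {c['prn']}: {c['cn0_dbhz']} dB-Hz")
# A fix needs at least four usable satellites; drawing on multiple
# constellations makes that far more likely under interference.
```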

Grand View Research estimated that the SDR market will climb to nearly $50 billion by 2030. As uncrewed systems proliferate, Locus Lock’s price point should also come down, asserted Nichols.

“And while there are some companies that have progressed their autonomy stacks to be quite high, they haven’t gotten their prices down to make sense in a mass-market scenario,” she said. “And so it’s crucial to enable this next generation of autonomous mobility at large to not compromise on performance but to be able to provide this at an affordable price. Locus Lock is providing high-end performance at a much lower price point.”

Nichols even predicted that the company could eventually get the product to under $1,000 with more adoption.

Global software-defined radio market forecast. Source: Grand View Research

Tesla Optimus takes steps toward more mobile systems

Yesterday, Tesla published on X the latest video of its Optimus humanoid moving fluidly, with an impressive gait for a robot. PitchBook recently predicted that this could be a breakout period for humanoids, with 84 leading companies having raised over $4.6 billion.

At the same time, the prospect of such advanced machines being hijacked via GPS spoofing into the service of terrorists, cybercriminals, or hostile governments is very real and horrifying. Thankfully, Nichols and her team are working with the Army Futures Command.

“A lot of this work has been done in spoofing and jamming — not only detection, but also mitigation,” she said. “We detect the type of RF environment that we are operating in to mitigate it and inform that end user with the situational awareness that is needed to assess ongoing attacks.”

“In addition, we can iterate much faster and bring in world-class experts on security and encryption to ensure that we protect secure military signals as much as possible,” said Nichols. “Our software can find assured reception that is demanded by these increasingly expensive and important assets that the military needs to protect.”

In ffVC’s view, our newest portfolio company is mission-critical to operating drones, robots, and other autonomous vessels safely, affordably, and securely in an increasingly dangerous world.

AMD announces Embedded+ architecture to accelerate edge AI https://www.therobotreport.com/amd-announces-embedded-architecture-to-accelerate-edge-ai/ https://www.therobotreport.com/amd-announces-embedded-architecture-to-accelerate-edge-ai/#respond Tue, 06 Feb 2024 14:00:30 +0000 https://www.therobotreport.com/?p=577788 AMD Embedded+ combines embedded processors with adaptive systems on chips to shorten edge AI time to market.

The new AMD Embedded+ architecture for high-performance compute. Source: Advanced Micro Devices

Robots and other smart devices need to process sensor data with a minimum of delay. Advanced Micro Devices Inc. today launched AMD Embedded+, a new computing architecture that combines AMD Ryzen Embedded processors with Versal adaptive systems on chips, or SoCs. The single integrated board is scalable and power-efficient and can accelerate time to market for original design manufacturer, or ODM, partners, said the company.

“In automated systems, sensor data has diminishing value with time, and systems must operate on the freshest information possible to enable the lowest-latency, deterministic response,” stated Chetan Khona, senior director of industrial, vision, healthcare, and sciences markets at AMD, in a release.

“In industrial and medical applications, many decisions need to happen in milliseconds,” he noted. “Embedded+ maximizes the value of partner and customer data with energy efficiency and performant computing that enables them to focus in turn on addressing their customer and market needs.”

AMD said it has innovated in high-performance computing, graphics, and visualization technologies for more than 50 years. The Santa Clara, Calif.-based company claimed that Fortune 500 businesses, research institutions, and billions of people around the world rely on its technology daily.

In the two years since it acquired Xilinx, AMD said it has seen increasing demand for AI in industrial/manufacturing, medical/surgical, smart-city infrastructure, and automotive markets. Not only can Embedded+ support video codecs and AI inferencing, but the combination of Ryzen and Versal can enable real-time control of robot arms, Khona said.

“Diverse sensor data is relied upon more than ever before, across applications,” said Khona in a press briefing last week. “The question is how to get sensor data from autonomous systems into a PC if it isn’t on a USB or some consumer interface.”




AMD Embedded+ paves a path to sensor fusion 

“The market for bringing processing closer to the sensor is growing rapidly,” said Khona. The use cases for embedded AI are expanding, with the machine vision market expected to reach $600 million and sensor data analysis $1.4 billion by 2028, he explained.

“AMD makes the path to sensor fusion, AI inferencing, industrial networking, control, and visualization simpler with this architecture and ODM partner products,” Khona said. He described the single motherboard as usable with multiple types of sensors, allowing for offloaded processing and situational awareness.

AMD said it has validated the Embedded+ integrated compute platform to help ODM customers reduce qualification and build times without needing to expend additional hardware or research and development resources. The architecture enables the use of a common software platform to develop designs with low power, small form factors, and long lifecycles for medical, industrial, and automotive applications, it said.

The company asserted that Embedded+ is the first architecture to combine AMD x86 compute with integrated graphics and programmable I/O hardware for critical AI-inferencing and sensor-fusion applications. “Adaptive computing excels in deterministic, low-latency processing, whereas AI Engines improve high performance-per-watt inferencing,” said AMD.

Ryzen Embedded processors, which contain high-performance Zen cores and Radeon graphics, also offer rendering and display options for an enhanced 4K multimedia experience. In addition, they include a built-in video codec for 4K H.264/H.265 encode and decode.

This combination of low-latency processing and high performance-per-watt inferencing enables real-time adaptive computing with flexible I/O, AI Engines for inferencing, and AMD Radeon graphics, said the company.

It added that the new system combines the best of each technology. Embedded+ enables 10GigE vision and CoaXPress connectivity to cameras via SFP+, said AMD, and image pre-processing occurs at pixel clock rates. This is especially important for mobile robot navigation, said Khona.
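
To make “pixel clock rates” concrete, here is a quick hedged calculation with an assumed camera (the parameters are illustrative, not AMD's): a 4K sensor at 60 frames/s yields roughly half a gigapixel per second, about two nanoseconds per pixel, which is why pre-processing happens in programmable logic rather than on a CPU core.

```python
# Back-of-the-envelope pixel-rate math for a 10GigE vision camera.
# Camera parameters are assumed for illustration, not from AMD.

width, height, fps = 3840, 2160, 60   # assumed 4K camera at 60 Hz
bytes_per_pixel = 1                   # assumed 8-bit monochrome sensor

pixels_per_s = width * height * fps
bits_per_s = pixels_per_s * bytes_per_pixel * 8

print(f"pixel rate: {pixels_per_s / 1e6:.0f} Mpixels/s "
      f"(~{1e9 / pixels_per_s:.1f} ns per pixel)")
print(f"data rate:  {bits_per_s / 1e9:.2f} Gbit/s on a 10 Gbit/s link")
```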

Sapphire delivers first Embedded+ ODM system

Embedded+ also allows system designers to choose from an ecosystem of ODM board offerings based on the architecture, said AMD. They can use it to scale their product portfolios to deliver performance and power profiles best suited to customers’ target applications, it asserted.

Sapphire Technology has built the first ODM system with the Embedded+ architecture: the Sapphire Edge+ VPR-4616-MB, a low-power Mini-ITX motherboard. It offers the full suite of capabilities in as little as 30 W by using the Ryzen Embedded R2314 processor and Versal AI Edge VE2302 adaptive SoC.

The Sapphire Edge+ VPR-4616-MB is also available in a full system, including memory, storage, power supply, and chassis. Versal is a programmable network on a chip that can be tuned for power or performance, said AMD. With Ryzen, it provides programmable logic for sensor fusion and real-time controls, it explained.

“By working with a compute architecture that is validated and reliable, we’re able to focus our resources to bolster other aspects of our products, shortening time to market and reducing R&D costs,” said Adrian Thompson, senior vice president of global marketing at Sapphire Technology. “Embedded+ is an excellent, streamlined platform for building solutions with leading performance and features.”

The Embedded+ qualified VPR-4616-MB from Sapphire Technology is now available for purchase.

Indy Autonomous Challenge announces new racecar and additional races https://www.therobotreport.com/indy-autonomous-challenge-announces-new-racecar-additional-races/ https://www.therobotreport.com/indy-autonomous-challenge-announces-new-racecar-additional-races/#respond Wed, 10 Jan 2024 19:48:28 +0000 https://www.therobotreport.com/?p=577401 The Indy Autonomous Challenge announced a completely new sensor and compute architecture for the AV24 racecar.

The IAC has revised the sensors and compute in the AV24 racecar. Source: Indy Autonomous Challenge

The Indy Autonomous Challenge, or IAC, made two major announcements at CES 2024 this week. The first was that the IAC plans to present four autonomous racecar events in 2024, and the second was an updated technology stack.

The first event of the year is the IAC@CES, which takes place tomorrow at the Las Vegas Motor Speedway. The Robot Report will be in attendance to cover this event later this week.

More Indy Autonomous Challenge races to come

The IAC will also participate for the second year in a row at the Milano Monza Open-Air Motor Show from June 16 to 18 in Milan, Italy. Last year, the event marked the debut of autonomous road racing with the IAC racecars.

Unlike the IAC’s oval-track races, the Milan Monza event challenges the university teams to develop their AI drivers for a road course. Monza is arguably one of the most famous road-racing venues in the world and exposes the IAC to a global racing audience, said event organizers.

The third event in 2024 will be from July 11 to 14 at the Goodwood Festival of Speed in the U.K. Described as “motorsport’s ultimate summer garden party,” the festival features the treacherous Goodwood hill climb.

This year, the IAC race cars will attempt the hill climb while setting new autonomous speed records. At last year’s event, the course was captured digitally, and the university teams are using that data to train their AI drivers.

Finally, the IAC will return this year to the famous Indianapolis Motor Speedway on Sept. 6, where it all started back in October 2021. Organizers expect to set new speed records and enable more university teams to qualify for head-to-head racing at the event.

Tech stack gets updates for the AV24

The other big news from IAC this week is the launch of a new generation of autonomous racecar, called the AV24. The original race platform, the AV21, has aged since its launch at the first race.

Winning university teams PoliMOVE and TUM have set multiple speed records over the past three years, pushing the AV21 to its sensor and computing limits. The platform has also suffered from maintenance and troubleshooting issues, especially the fragility of its wiring harnesses. Harness problems have plagued many of the teams as they prepared for prior competitions.

In response, the IAC team reworked the sensor, networking, and compute stack, engineering an entirely new platform that should enable the university teams to keep pushing the limits of speed and control while developing cutting-edge AI driver algorithms. The AV24 does not change the racecar’s chassis, engine, or physical dimensions.

Here’s a look at what’s new in the AV24 technology stack.

The new IAC AV24 race car includes all new sensors and compute architecture. | Credit: IAC

Most notably, the AV24 includes split braking controls that allow it to manage braking on each of the vehicle’s four wheels separately, essentially giving the AI drivers more control than is humanly possible.
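
To illustrate the idea, and purely schematically (this is not the IAC’s controller), per-wheel braking lets software bias brake force toward the wheels carrying the most load, for example:

```python
# Schematic per-wheel brake allocation: split the total demanded brake
# force in proportion to each wheel's vertical load. Purely illustrative;
# not the IAC's control software.

def allocate_braking(total_force_n, wheel_loads_n):
    """Distribute brake force proportionally to per-wheel load."""
    total_load = sum(wheel_loads_n.values())
    return {wheel: total_force_n * load / total_load
            for wheel, load in wheel_loads_n.items()}

# Corner entry: weight transfers forward, so the fronts can brake harder.
loads_n = {"FL": 4200.0, "FR": 3900.0, "RL": 2100.0, "RR": 1800.0}
for wheel, force in allocate_braking(12000.0, loads_n).items():
    print(f"{wheel}: {force:,.0f} N")
```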

“The IAC event has succeeded beyond our wildest dreams,” said Paul Mitchell, co-founder and CEO of the Indy Autonomous Challenge. “We originally thought it would be a one-and-done challenge, but the event has thrived, so it was time to go back to the drawing board and deploy a new technology stack leveraging the best technology from our event partners.”

Eyeonic Vision System Mini unveiled by SiLC Technologies at CES 2024 https://www.therobotreport.com/silc-launches-eyeonic-mini-ces-2024/ https://www.therobotreport.com/silc-launches-eyeonic-mini-ces-2024/#respond Tue, 09 Jan 2024 15:00:13 +0000 https://www.therobotreport.com/?p=577373 SiLC says its new Eyeonic Mini AI machine vision system provides sub-millimeter resolution at a significantly reduced size.

The Eyeonic Vision System Mini is designed to be compact and power-efficient. Source: SiLC Technologies

SiLC Technologies Inc. today at CES launched its Eyeonic Vision System Mini, which combines a full multi-channel frequency-modulated continuous-wave (FMCW) lidar on a single silicon photonics chip with an integrated FMCW lidar system on chip (SoC). The Eyeonic Mini “sets a new industry benchmark in precision,” said the Monrovia, Calif.-based company.

“Our FMCW lidar platform aims to enable a highly versatile and scalable platform to address the needs of many applications,” said Dr. Mehdi Asghari, CEO of SiLC Technologies, in a release.

“At CES this year, we’re demonstrating our long-range vision capabilities of over 2 km [1.2 mi.],” he added. “With the Eyeonic Mini, we’re showcasing our high precision at shorter distances. Our FMCW lidar solutions, at short or long distances, bring superior vision to machines to truly enable the next generation of AI based automation.”

Founded in 2018, SiLC Technologies said its 4D+ Eyeonic lidar chip integrates all the photonics functions needed to enable a coherent vision sensor. The company added that the system offers a small footprint and addresses the need for low power consumption and cost, making it suitable for robotics, autonomous vehicles, biometrics, security, and industrial automation.

In November 2023, SiLC raised $25 million to expand production of its Eyeonic Vision System.

Eyeonic Mini uses Surya SoC for precision

To be useful, robots need powerful, compact, and scalable vision that won’t be affected by complex or unpredictable environments, conditions, or interference from other systems, asserted SiLC Technologies. Sensors must also capture motion, velocity, polarization, and precision data, capabilities that the company said make FMCW superior to existing time-of-flight (ToF) systems.
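
The physics behind that claim can be shown with the standard FMCW relations: the beat frequency between the outgoing and returned chirp encodes range, while the Doppler shift yields radial velocity directly, something ToF sensors must infer across frames. The chirp parameters below are illustrative, not SiLC’s.

```python
# Textbook FMCW lidar relations: range from the beat frequency and
# velocity from the Doppler shift. Chirp parameters are illustrative,
# not SiLC specifications.

C = 3.0e8                        # speed of light, m/s
BANDWIDTH = 4.0e9                # assumed chirp bandwidth, Hz
CHIRP_TIME = 10e-6               # assumed chirp duration, s
SLOPE = BANDWIDTH / CHIRP_TIME   # chirp slope, Hz/s
WAVELENGTH = 1550e-9             # common silicon-photonics band, m

def range_m(f_beat_hz):
    """R = c * f_beat / (2 * slope)."""
    return C * f_beat_hz / (2 * SLOPE)

def velocity_mps(f_doppler_hz):
    """v = wavelength * f_doppler / 2."""
    return WAVELENGTH * f_doppler_hz / 2

print(f"beat 26.7 MHz    -> range    {range_m(26.7e6):.2f} m")
print(f"Doppler 12.9 MHz -> velocity {velocity_mps(12.9e6):.1f} m/s")
```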

FMCW technology enables newer imaging systems to directly capture images for AI, factory robots, home security, autonomous vehicles, and perimeter security applications, said SiLC.

The Eyeonic Mini uses what the company described as “the industry’s first purpose-built” digital lidar processor SoC, the iND83301 or “Surya,” developed by indie Semiconductor. As a result, the company said, it can deliver “an order of magnitude greater precision than existing technologies while being one-third the size of last year’s pioneering model.”

“The Eyeonic Mini represents the next evolution of our close collaboration with SiLC. The combination of our two unique technologies has created an industry-leading solution in performance, size, cost, and power,” said Chet Babla, senior vice president for strategic marketing at indie Semiconductor. “This creates a strong foundation for our partnership to grow and address multiple markets, including industrial automation and automotive.”

With Surya, a four-channel FMCW lidar chip provides robots with sub-millimeter depth precision from distances exceeding 10 m (32.8 ft.), said SiLC. This is useful for warehouse automation and machine vision applications, it noted.

Dexterity uses sensors for truck loading, unloading

For instance, said SiLC Technologies, AI-driven palletizing robots equipped with the Eyeonic Mini can view and interact with pallets, optimize package placement, and efficiently and safely load them onto trucks. With more than 13 million commercial trucks in the U.S., this technology promises to significantly boost efficiency in loading and unloading processes, the company said.

Dexterity Inc. said it is working to give robots the intelligence to see, move, touch, think and learn, freeing human workers for other warehouse and logistics tasks. The Redwood City, Calif.-based company is incorporating SiLC’s technology into its autonomy platform.

“At Dexterity, we focus on AI, machine learning, and robotic intelligence to make warehouses more productive, efficient and safe,” said CEO Samir Menon. “We are excited to partner with SiLC to unlock lidar for the robotics and logistics markets.”

“Their technology is a revolution in depth sensing and will enable easier and faster adoption of warehouse automation and robotic truck load and unload,” he said.

At CES this week in Las Vegas, SiLC Technologies is demonstrating the new Eyeonic Mini in private meetings at the Westgate Hotel. For more information or to schedule an appointment, e-mail SiLC at contact@SiLC.com.

NVIDIA Jetson supports Zipline drone deliveries, as Omniverse enables Amazon digital twins https://www.therobotreport.com/nvidia-jetson-supports-zipline-drone-deliveries-as-omniverse-enables-amazon-digital-twins/ https://www.therobotreport.com/nvidia-jetson-supports-zipline-drone-deliveries-as-omniverse-enables-amazon-digital-twins/#respond Tue, 19 Dec 2023 16:15:41 +0000 https://www.therobotreport.com/?p=568936 NVIDIA technologies are helping supply chains add new levels of automation, as seen in its work with Adobe, Amazon, and Zipline.

NVIDIA Jetson Xavier NX processes sensor inputs for the P1 delivery drone. Source: Zipline

Robotics, simulation, and artificial intelligence are providing new capabilities for supply chain automation. For example, Zipline International Inc. drone deliveries and Amazon Robotics digital twins for package handling demonstrate how NVIDIA Corp. technologies can enable industrial applications.

“You can pick the right place for your algorithms to run to make sure you’re getting the most out of the hardware and the power that you are putting into the system,” said A.J. Frantz, navigation lead at Zipline, in a case study.

NVIDIA claimed that its Jetson Orin modules can perform up to 275 trillion operations per second (TOPS) to provide mission-critical computing for autonomous systems in everything from delivery services and agriculture to mining and undersea exploration. The Santa Clara, Calif.-based company added that Jetson’s energy efficiency can help businesses electrify their vehicles and reduce carbon emissions to meet sustainability goals.
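
For a rough sense of scale, that peak throughput works out to several TOPS per watt; the power figure in the sketch below is an assumed module-level budget, not a number from this article.

```python
# Rough performance-per-watt arithmetic for a Jetson Orin-class module.
# The 60 W budget is an assumption for illustration; the article quotes
# only the 275 TOPS peak.

peak_tops = 275
assumed_power_w = 60
print(f"{peak_tops} TOPS / {assumed_power_w} W "
      f"= {peak_tops / assumed_power_w:.1f} TOPS per watt")
```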

Zipline drones rely on Jetson Xavier NX to avoid obstacles

Founded in 2011, Zipline said it has completed more than 800,000 deliveries of food, medication, and more in seven countries. The San Francisco-based company said its drones have flown over 55 million miles using the NVIDIA Jetson edge AI platform for autonomous navigation and landings.

Zipline, which raised $330 million in April at a valuation of $4.2 billion, is a member of the NVIDIA Inception program, through which startups can get technology support. The company’s Platform One, or P1, drone uses a Jetson Xavier NX system-on-module (SOM) to process sensor inputs.

“The NVIDIA Jetson module in the wing is part of what delivers our acoustic detection and avoidance system, so it allows us to listen for other aircraft in the airspace around us and plot trajectories that avoid any conflict,” Frantz explained.

Zipline’s fixed-wing drones can fly out more than 55 miles (88.5 km) at 70 mph (112.6 kph) from several distribution centers and then return. Capable of hauling up to 4 lb. (1.8 kg) of cargo, they fly autonomously and release packages at their destinations by parachute.
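
Those figures imply delivery times ground vehicles can’t match, as a quick straight-line calculation shows (winds, climb, and routing ignored):

```python
# One-way flight time from the P1 figures in this article, assuming a
# straight-line path and ignoring winds, climb, and routing.

service_radius_mi = 55
cruise_mph = 70

minutes = service_radius_mi / cruise_mph * 60
print(f"{service_radius_mi} mi at {cruise_mph} mph -> ~{minutes:.0f} min "
      "to the edge of the service area")
# Nearer stops scale linearly: a 12 mi delivery takes roughly 10 minutes.
```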




P2 hybrid drone includes Jetson Orin NX for sensor fusion, safety

Zipline’s Platform Two, or P2, hybrid drone can fly fast on fixed-wing flights, as well as hover. It can carry 8 lb. (3.6 kg) of cargo for 10 miles (16 km), as well as a droid that can be lowered on a tether to precisely place deliveries. It’s intended for use in dense, urban environments.

The P2 uses two Jetson Orin NX modules. One runs the sensor fusion system that lets the drone understand its environment. The other is in the droid for redundancy and safety.

Zipline claimed that its drones, nicknamed “Zips,” can deliver items 7x faster than ground vehicles. It boasted that it completes one delivery every 70 seconds globally.

“Our aircraft fly at 70 miles per hour, as the crow flies, so no traffic, no waiting at lights — we’re talking minutes here in terms of delivery times,” said Joseph Mardall, head of engineering at Zipline. “Single-digit minutes are common for deliveries, so it’s faster than any alternative.”

In addition to transporting pizza, vitamins, and medications, Zipline works with Walmart, restaurant chain Sweetgreen, Michigan Medicine, MultiCare Health Systems, Intermountain Health, and the government of Rwanda, among others. It delivers to more than 4,000 hospitals and health centers.

Amazon uses Omniverse, Adobe Substance 3D for realistic packages

For warehouse robots to be able to handle a wide range of packages, they need to be trained on massive but realistic data sets, according to Amazon Robotics.

“The increasing importance of AI and synthetic data to run simulation models comes with new challenges,” noted Adobe Inc. in a blog post. “One of these challenges is the creation of massive amounts of 3D assets to train AI perception programs in large-scale, real-time simulations.”

Amazon Robotics turned to Adobe Substance 3D, Universal Scene Description (USD), and NVIDIA Omniverse to develop random but realistic 3D environments and thousands of digital twins of packages for training AI models.

NVIDIA Omniverse integrates with Adobe Substance 3D to generate realistic package models for training robots. Source: Adobe

NVIDIA Omniverse allows simulations to be modified, shared

“The Virtual Systems Team collaborates on a wide range of projects, encompassing both extensive solution-level simulations and individual workstation emulators as part of larger solutions,” explained Hunter Liu, technical artist at Amazon Robotics.

“To describe the 3D worlds required for these simulations, the team utilizes USD,” he said. “One of the team’s primary focuses lies in generating synthetic data for training machine learning models used in intelligent robotic perception programs.”

The team uses Houdini for procedural mesh generation and Substance 3D Designer for texture generation and loading virtual boxes into Omniverse, added Haining Cao, a texturing artist at Amazon Robotics.

The team has developed multiple workflows to represent the vast variety of packages that Amazon handles. It has gone from generating two assets per hour to 300, said Liu.

“To introduce further variations, we utilize PDG (Procedural Dependency Graph) within Houdini,” he noted. “PDG enables us to efficiently batch process multiple variations, transforming the Illustrator files into distinct meshes and textures.”

After generating the synthetic data and publishing the results to Omniverse, the Adobe-NVIDIA integration enables Amazon’s team to change parameters to, for example, simulate worn cardboard. The team can also use Python to trigger randomized values and collaborate on the data within Omniverse, said Liu.
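
For a flavor of what Python-driven randomization looks like, here is a minimal sketch that authors a batch of package stand-ins with randomized dimensions using the open-source pxr USD bindings. Amazon’s actual pipeline, with Houdini PDG and Substance textures inside Omniverse, is far richer; the file name and parameters here are hypothetical.

```python
# Minimal sketch: author randomized box "packages" on a USD stage, a
# small-scale version of the variation Amazon drives with Houdini PDG
# and Omniverse. Uses open-source pxr bindings; names are hypothetical.
import random
from pxr import Usd, UsdGeom, Gf

stage = Usd.Stage.CreateNew("packages.usda")   # hypothetical output file
UsdGeom.Xform.Define(stage, "/World")

random.seed(42)
for i in range(10):
    box = UsdGeom.Cube.Define(stage, f"/World/package_{i:03d}")
    box.GetSizeAttr().Set(1.0)                 # unit cube before scaling
    # Randomize each dimension (meters) to mimic package variety.
    sx, sy, sz = (random.uniform(0.1, 0.6) for _ in range(3))
    box.AddTranslateOp().Set(Gf.Vec3d(i * 0.8, 0.0, sz / 2))
    box.AddScaleOp().Set(Gf.Vec3f(sx, sy, sz))

stage.GetRootLayer().Save()
print("wrote packages.usda with 10 randomized package stand-ins")
```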

In addition, Substance 3D includes features for creating “intricate and detailed textures while maintaining flexibility, efficiency, and compatibility with other software tools,” he said. Simulation-specific extensions bundled with NVIDIA Isaac Sim allow for further generation of synthetic data and live simulations using robotic manipulators, lidar, and other sensors, Liu added.
