Software / Simulation Archives - The Robot Report
https://www.therobotreport.com/category/software-simulation/
Robotics news, research and analysis

Formic brings in $27.4M to expand its global fleet, increase support network
https://www.therobotreport.com/formic-brings-in-27-4m-to-expand-its-global-fleet-increase-support-network/
Tue, 25 Jun 2024
Since its previous funding, Formic's fleet of robotic equipment has completed 100,000 production hours at more than 99% uptime.


Formic Stack addresses labor shortages with automated palletizing, while Formic Pack automates placement of prepacked goods into containers. | Source: Formic Technologies

Formic Technologies Inc. today announced that it raised another $27.4 million in Series A funding. This builds on the company’s January 2022 funding round, bringing the company’s total Series A funding to $52 million. 

“[During] the first Series A, we were really at the early stages of kind of figuring out the business,” Formic co-founder and CEO Saman Farid told The Robot Report. “We had probably fewer than 10 deployments. We had some early customers; we had some data around how much productivity our robots can bring to our customers. But it was really kind of anecdotal, and in the roughly two years since then, I would say we’ve really kind of figured out a lot of those questions and have a lot more to show.”

The Chicago-based company offers its systems to U.S. manufacturers through a robots-as-a-service (RaaS) model, in which it delivers support for automation at an hourly rate. This support includes deploying the systems and providing continuous monitoring and maintenance throughout the engagement to ensure success. 

Since its last funding, Formic said its robotic fleet has completed 100,000 production hours at more than 99% uptime. The company said it expects its equipment to gain another 100,000 production hours in the next 170 days. 

Many manufacturers are still new to robotics

Seventy-five percent of Formic’s customers have never used robots before adopting its platform, estimated Farid. The company aims to help more businesses realize the benefits of automation, and it has its work cut out for it. An MIT report found that only 10% of U.S. manufacturers use automation in their production facilities. 

At the same time, the sector will need as many as 3.8 million new employees by 2033. An estimated 1.9 million of these jobs could go unfilled, making automation more important than ever, said industry analysts.

“Manufacturers have been struggling over the last five to 10 years to compete with global wages,” Farid said. “It’s much harder when they don’t have labor available.”

“Because of our robots, a lot of our customers are actually winning more and more business,” he asserted. “They’re able to grow, instead of what they’ve spent the last 10 years doing, which is trying not to die.”

Formic plans to use funding for expansion

“In the coming years, we plan to grow more globally. We plan to expand to new regions, both in the U.S. and outside the U.S.,” Farid said. “We are going to grow our sales and deployment and maintenance team across all of those areas so that we can support customers in all those regions.” 

With the new funding, Formic plans to expand its fleet of standardized equipment. The company said this will allow it to provide more automation to more manufacturers, deploy more rapidly, and offer shorter lead times.

Formic said it also hopes to increase its network of support experts across the U.S., enabling faster customer responses while upholding its maintenance service-level agreements (SLAs).

“That [funding] is really going towards spending more capital on building more regional capabilities; building more footprint; and building more software tools to make it faster, cheaper, easier, and more reliable to deploy these robots,” Farid said. 

In addition, Formic said it will enhance its equipment-agnostic software that uses AI for motion planning, predictive maintenance, system design, and creating more intuitive customer interfaces and dashboards. 

“We just started our AI product tools, and we’re starting to see a huge impact,” said Farid. “Because of all the data that we are collecting from all of our deployed robots, we believe we have the biggest and best data set out there to train world-class robotics and AI models.”

“One of them is called Formic Core, which is basically like an operating system for robots, and it allows us to really quickly configure the robot to do any task in any location,” he explained. “We have another set of tools called Fast Formic Automation Software Tools, which are basically tools for really rapid site evaluation.”

“[The software] uses a lidar scan of a customer site to quickly generate the right robot arms and design for the entire robot work cell, which is something that’s really unique,” Farid said. “We’re able to basically cut our deployment costs nearly in half because of all the AI and software tools we’ve built. We’ve also increased the ability of our crew to maintain those robots efficiently.”

Investors bring more than just capital to the table

Blackhorn Ventures led Formic’s funding round, which also included participation from Mitsubishi HC Capital America, NEC, Translink Capital, Alumni Ventures, FJ Labs, Lux Capital, Initialized Capital, and Lorimer Ventures.

“We chose to work with [Blackhorn] because they are very focused on industrial technology, and they have a group of investors in their fund that are large industrial conglomerates from around the world,” Farid said. “So, we’re able to get a lot of mutual support from them. In addition to just putting capital in, they actually are able to really help us with growing our customer base and getting more access.” 

Formic also announced a joint commercial agreement with Mitsubishi HC Capital and its U.S.-based group company, Mitsubishi HC Capital America. The two companies will collaborate to source and finance the entire lifecycle of Formic’s RaaS model, an all-encompassing managed system for manufacturing automation.

RTI Connext to deliver real-time data connectivity to NVIDIA Holoscan
https://www.therobotreport.com/rti-connext-delivers-real-time-data-connectivity-nvidia-holoscan/
Tue, 25 Jun 2024
RTI Connext provides reliable communications for users of NVIDIA's Holoscan SDK to speed development of devices such as surgical robots.


Medical device developers can now use RTI Connext and NVIDIA Holoscan. Source: Real-Time Innovations

Devices such as surgical robots need access to distributed, reliable, and continuous data streaming across different sensors and devices. Real-Time Innovations, or RTI, today said it is collaborating with NVIDIA Corp. to deliver real-time data connectivity for the NVIDIA Holoscan software development kit with RTI Connext.

“Connectivity is the foundation for cutting-edge technologies, such as AI, that are transforming the medtech industry and beyond,” stated Darren Porras, market development manager for medical at Real-Time Innovations. “We’re proud to work with NVIDIA to harness the transformative power of AI to revolutionize healthcare.”

“By providing competitive, tailored solutions, we are paving the way for sustainable business value across the healthcare, automotive, and industrial sectors, marking an important step toward a future where technology enhances the quality of life and drives innovation,” he added.

Founded in 1991, Real-Time Innovations claimed that it has 2,000 customer designs and that its software runs more than 250 autonomous vehicle programs, controls North America’s largest power plants, and integrates over 400 defense programs. The Sunnyvale, Calif.-based company said its systems also support next-generation medical technologies and surgical robots, Canada’s air traffic control, and NASA’s launch-control systems.

RTI Connext designed to reliably distribute data

The RTI Connext software framework enables users to build intelligent distributed systems that combine advanced sensing, fast control, and artificial intelligence algorithms, said Real-Time Innovations. This can help developers bring capable systems to market faster, it said.

“Connext facilitates interoperable and real-time communication for complex, intelligent systems in the healthcare industry and beyond,” according to RTI. It is based on the Data Distribution Service (DDS) standard and has been proven across industries to reliably communicate data, the company said.

Product teams can now efficiently build and deploy AI-enabled applications and distributed systems that require low-latency and reliable data sharing for sensor and video processing. Connext, which is available for free trials, allows applications to work together as one, said RTI.
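To make the data-centric publish/subscribe model behind DDS concrete, here is a minimal, illustrative Python sketch. This is not the RTI Connext API: the topic name, the `VitalsSample` type, and the in-memory delivery are all invented for illustration. Real DDS adds typed discovery and configurable quality-of-service (QoS) policies, such as reliability and deadlines, on top of this pattern.

```python
# Illustrative sketch of the data-centric publish/subscribe pattern that
# the DDS standard (and hence RTI Connext) is built around. Writers
# publish typed samples to a named topic, and readers receive them
# without either side knowing about the other directly.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class VitalsSample:
    """A typed data sample, e.g. a patient-monitoring reading."""
    patient_id: str
    heart_rate: int


class Topic:
    """A named channel that decouples data writers from data readers."""
    def __init__(self, name: str) -> None:
        self.name = name
        self._readers: List[Callable[[VitalsSample], None]] = []

    def subscribe(self, on_data: Callable[[VitalsSample], None]) -> None:
        self._readers.append(on_data)

    def write(self, sample: VitalsSample) -> None:
        # In real DDS, delivery honors QoS policies (reliability,
        # deadline, durability); here we just fan out synchronously.
        for on_data in self._readers:
            on_data(sample)


# Usage: a monitor node publishes; a dashboard node consumes.
topic = Topic("PatientVitals")
received: List[VitalsSample] = []
topic.subscribe(received.append)
topic.write(VitalsSample(patient_id="p-001", heart_rate=72))
print(received[0].heart_rate)  # -> 72
```

The decoupling shown here is why the pattern suits systems of systems: a new reader (say, a logging node) can subscribe to the same topic without any change to the publisher.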

NVIDIA Holoscan gets advanced data flows

RTI Connext provides a connectivity framework for the NVIDIA Holoscan software development kit (SDK), offering integration across various systems and sensors to complement its AI capabilities. 

“Enterprises are looking for advanced software-defined architectures that deliver on low latency, flexibility, reliability, scalability, and cybersecurity,” said David Niewolny, director of business development for healthcare and medical at NVIDIA. “With RTI Connext and NVIDIA Holoscan, medical technology developers can accelerate their software-defined product visions by leveraging infrastructure purpose-built for healthcare applications.”

Connext now integrates with NVIDIA’s AI sensor-processing pipelines and reference workflows, bolstering data flows and real-time AI processing across a system of systems. With capabilities for real-time visualization and data-driven insights, the technologies can help drive more precise and automated minimally invasive procedures, clinical monitoring, and next-generation medical imaging platforms. They can also help developers create smarter, integrated systems across industries, said the partners.

NVIDIA said Holoscan offers the software and hardware needed to build AI applications and deploy sensor-processing capabilities from edge to cloud. This can help companies explore new capabilities, accelerate time to market, and lower costs, said the Santa Clara, Calif.-based company.

NVIDIA Holoscan now supports interoperability with a wide range of legacy systems, such as Windows-based medical devices, real-time operating system nodes in surgical robots, and patient-monitoring systems, through RTI Connext.

GrayMatter raises $45M Series B to ease robot programming for manufacturers
https://www.therobotreport.com/graymatter-robotics-raises-45m-series-b-ease-programming-manufacturers/
Thu, 20 Jun 2024
GrayMatter Robotics says its AI-based systems can double or quadruple productivity as its customer base grows and hires more staffers.


GrayMatter automated manually intensive tasks at Lawrence Brothers. Source: GrayMatter Robotics

Like businesses in other industries, U.S. manufacturers face widening labor shortfalls and need automation to help fill those gaps. GrayMatter Robotics today announced that it has raised $45 million in Series B funding. The Carson, Calif.-based company said it plans to use the investment to expand to meet customer demand. 

“We founded GrayMatter to enhance productivity while prioritizing workforce well-being,” stated Ariyan Kabir, co-founder and CEO of GrayMatter Robotics. “With our physics-based AI-powered systems, we are fulfilling our mission while unlocking new levels of efficiency. With our investors’ support, we are making a real difference for shop workers and addressing the critical labor shortages in manufacturing today.”

GrayMatter Robotics said it bundles proprietary artificial intelligence with off-the-shelf robots, sensors, and tools for application-specific, turnkey solutions. The company offers its systems through a robotics-as-a-service (RaaS) model and said they relieve shop floor workers of tedious and ergonomically challenging tasks. They can also enhance production capacity and reduce scrap, repair, and rework costs, it said. 

GrayMatter applies AI to production tasks

The $2.5 trillion U.S. manufacturing industry is grappling with a growing backlog of unfilled orders due to a severe labor shortage. Many of these roles are hazardous and demand extensive training, leading to a critical gap of 3.8 million unfilled jobs, according to Deloitte.

SK Gupta, Ariyan Kabir, and Brual Shah founded GrayMatter Robotics in 2020. The company said it holds 10 patents and has processed more than 7.5 million sq. ft. (about 700,000 sq. m) of product surface area.

GrayMatter said its proprietary GMR-AI technology enables robots to self-program and adapt to high-mix, high-variability manufacturing environments, providing consistent quality and reducing cycle times. 

Smart workcells with GrayMatter technology can autonomously handle complex tasks such as sanding, polishing, grinding, coating, and finishing, it added. By automating such jobs, businesses can meet global demand while also improving the quality of life for their workers, said the company. 

Products including Scan&Sand, Scan&Polish, Scan&Buff, and Scan&Grind can increase quality and consistency while reducing costs, said GrayMatter. Manufacturers can benefit from system availability of 95% to 98%, and most contingencies can be resolved in under five minutes, it said.

GrayMatter claimed that its systems can work two to four times faster than manual operations and that employee training that used to take six months can now be done in less than a day. In addition, the company said its robots can help businesses address sustainability goals by reducing consumption and consumable waste by 30% or more over traditional methods.


GrayMatter combines AI and robotics to improve finishing efficiency and reduce waste. Source: GrayMatter Robotics

Users report increased efficiencies

Over the past two years, GrayMatter Robotics has deployed robots across North America in aerospace, defense, specialty vehicles, marine, recreation, metal fabrication, and consumer products. The company said its RaaS model helps manufacturers enhance production capacity and reduce costs associated with scrap, repair, and rework.

“We are excited to partner with GrayMatter Robotics, as their AI-driven robotic solutions have enabled us to more efficiently address major demand growth in our operations stemming from increased football participation and market-share gains, ensuring consistent quality and throughput despite workforce staffing challenges,” said Drew Dixon, director of distribution and strategy at sports equipment maker Riddell.

“Collaborating with GrayMatter Robotics underscores Riddell’s ongoing commitment to innovation and excellence in both its manufacturing operations as well as the protective equipment it delivers to the field,” he added.

“GrayMatter helps us replace some of our more taxing manual labor,” said Melanie Protti-Lawrence, president of steel fabricator Lawrence Brothers Inc. “We are proud to partner with GrayMatter in an effort to provide longevity in the workforce. We’re constantly working toward a healthier work-life balance, with a focus on working to live rather than living to work.”

“Their robots are not just tools but [also] enablers of growth,” she said. “They allow our workers to engage in more meaningful and less physically taxing tasks, contributing to a healthier and more productive work environment.”

Investors help GrayMatter to grow

With the new capital, GrayMatter is actively hiring for a wide range of roles to meet customer demands, expanding its Los Angeles headquarters, and accelerating the development and deployment of its next-generation AI-powered robots.

Wellington Management led the Series B round, which also included NGP Capital, Euclidean Capital, Advance Venture Partners, and SQN Venture Partners. They joined existing investors 3M Ventures, B Capital, Bow Capital, Calibrate Ventures, OCA Ventures, and Swift Ventures.

“GrayMatter is driving a pivotal transformation in manufacturing with their advanced AI solutions,” said Sean Petersen, sector lead for private climate investing at Wellington Management. “Their ability to enhance productivity, energy efficiency, and safety while managing costs positions them uniquely in the market.”

Wellington Management Co. advises 2,500 clients in more than 60 countries. The Boston-based firm manages more than $1.2 trillion for clients, including pensions, endowments and foundations, insurers, and global wealth managers.

Wellington’s Private Investing Team has raised more than $8.5 billion in global assets, and it invests in multiple sectors and technologies. The team includes more than 1,000 professionals who combine private market experience with public market expertise, extensive networks, and robust research to benefit both investors and entrepreneurs.

“The combination of AI-driven technology and depth of domain expertise in the GrayMatter solution blew us away,” said Debjit Mukerji, partner at NGP Capital. “It is incredibly challenging to develop high-performance and ultra-reliable robots for such difficult manufacturing conditions.”

“Going to market with GrayMatter Robotics aligns with our mission to foster innovative solutions that drive efficiency and sustainability in manufacturing,” said Adi Leviatan, president of 3M’s Abrasives Division. “This technology addresses critical industry challenges and delivers significant value to our customers.”

Only 16% of manufacturers have real-time visibility into production, says Zebra
https://www.therobotreport.com/zebra_finds_only-16-percent-manufacturers-has-visibility-production/
Thu, 20 Jun 2024
Manufacturers want more visibility into processes and to reskill staffers to work with automation, found Zebra and Azure Knowledge.


Zebra’s portfolio includes FlexShelf robots for parts fulfillment. Source: Zebra Technologies

Only 1 in 6 manufacturers has a clear understanding of its own processes, according to a new study from Zebra Technologies Corp. The report also found that 61% of manufacturers expect artificial intelligence to drive growth by 2029, up from 41% in 2024.

Zebra said the surge in AI interest, along with 92% of survey respondents prioritizing digital transformation, demonstrates manufacturers’ intent to improve data management and use new technologies that enhance visibility and quality throughout production.

“Manufacturers struggle with using their data effectively, so they recognize they must adopt AI and other digital technology solutions to create an agile, efficient manufacturing environment,” stated Enrique Herrera, industry principal for manufacturing at Zebra Technologies. “Zebra helps manufacturers work with technology in new ways to automate and augment workflows to achieve a well-connected plant floor where people and technology collaborate at scale.”

Zebra commissioned Azure Knowledge Corp. to conduct 1,200 online surveys among C-suite executives and IT and OT (information and operational technology) leaders within various manufacturing sectors. They included automotive, electronics, food and beverage, pharmaceuticals, and medical devices. Respondents were surveyed in Asia, Europe, Latin America, and North America.

The fully connected factory is elusive

Although manufacturers said digital transformation is a strategic priority, achieving a fully connected factory remains elusive, noted Zebra Technologies. The company asserted that visibility is key to optimizing efficiency, productivity, and quality on the plant floor.

However, only 16% of manufacturing leaders globally reported they have real-time, work-in-progress (WIP) monitoring across the entire manufacturing process, reported the 2024 Manufacturing Vision Study.

While nearly six in 10 manufacturing leaders said they expect to increase visibility across production and throughout the supply chain by 2029, one-third said getting IT and OT to agree on where to invest is a key barrier to digital transformation.

In addition, 86% of manufacturing leaders acknowledged that they are struggling to keep up with the pace of technological innovation and to securely integrate devices, sensors, and technologies throughout their facilities and supply chain. Zebra claimed that enterprises can use its systems for higher levels of security and manageability, as well as new analytics to elevate business performance.

Technology can augment workforce efficiency

Manufacturers are shifting their growth strategies by integrating and augmenting workers with AI and other technologies over the next five years, found Zebra’s study. Nearly three-quarters (73%) said they plan to reskill labor for data and technology usage, and seven in 10 said they expect to augment workers with mobility-enabling technology.

Manufacturers are implementing tools including tablets (51%), mobile computers (55%), and workforce management software (56%). In addition, 61% of manufacturing leaders said they plan to deploy wearable mobile computers.

Across the C-suite, IT, and OT, leaders understand that labor initiatives must extend beyond improving worker efficiency and productivity with technology. Six in 10 leaders ranked ongoing development, retraining/upskilling, and career path development to attract future talent as high priorities for their organizations.

Automation advances to optimize quality

The quest for quality has intensified as manufacturers across segments must do more with fewer resources. According to Zebra and Azure’s survey, global manufacturers said today’s most significant quality management issues are real-time visibility (33%), keeping up with new standards and regulations (29%), integrating data (27%), and maintaining traceability (27%).

Technology implementation plans are addressing these challenges. Over the next five years, many executives said they plan to implement robotics (65%), machine vision (66%), radio frequency identification (RFID; 66%), and fixed industrial scanners (57%).

Most survey respondents agreed that these automation decisions are driven by factors including the need to provide the workforce with high-value tasks (70%), meet service-level agreements (SLAs; 69%), and add more flexibility to their plant floors (64%).

Zebra Technologies shares regional findings

  • Asia-Pacific (APAC): While only 30% of manufacturing leaders said they use machine vision across the plant floor in APAC, 67% are implementing or planning to deploy this technology within the next five years.
  • Europe, the Middle East, and Africa (EMEA): In Europe, reskilling labor to enhance data and technology usage skills was the top-ranked workforce strategy for manufacturing leaders to drive growth today (46%) and in five years (71%).
  • Latin America (LATAM): While only 24% of manufacturing leaders rely on track and trace technology in LATAM, 74% are implementing or plan to implement the technology in the next five years.
  • North America: In this region, 68% of manufacturing leaders ranked deploying workforce development programs as their most important labor initiative.

The Manufacturing Vision Study provided insights around digitalization and the connected factory. Source: Zebra Technologies

Zebra to discuss digital transformation

While digital transformation is a priority for manufacturers, achieving it is fraught with obstacles, including the cost and availability of labor, scaling technology solutions, and the convergence of IT and OT, according to Zebra Technologies. The Lincolnshire, Ill.-based company said visibility is the first step to such transformation.

Emerging technologies such as robotics and AI enable manufacturers to use data to identify, react, and prioritize problems and projects so they can deliver incremental efficiencies that yield the greatest benefits, Zebra said. The company said it provides systems to enable businesses to intelligently connect data, assets, and people.

Zebra added that its portfolio, which includes software, mobile robots, machine vision, automation, and digital decisioning, can help boost visibility, optimize quality, and augment workforces. It has more than 50 years of experience in scanning, track-and-trace, and mobile computing systems.

The company has more than 10,000 partners across over 100 countries, as well as 80% of the Fortune 500 as customers. Zebra is hosting a webinar today about how to overcome top challenges to digitalization and automation.

Vecna Robotics raises more than $100M, hires COO to expand warehouse automation
https://www.therobotreport.com/vecna-robotics-raises-100m-hires-coo-expand-warehouse-automation/
Thu, 20 Jun 2024
Vecna Robotics has more than doubled its valuation and hired a chief operating officer as it develops a case-picking system.


Vecna offers warehouses robotic tuggers, lift trucks, and pallet jacks. Source: Vecna Robotics

WALTHAM, Mass. — Although investment in robotics dipped in the past year, suppliers with proven products and business models have been finding funding. Vecna Robotics today announced the close of its Series C round at $100 million, with $40 million in new funding including equity and debt. The financing nearly doubles the company valuation since its Series B round.

“Finalizing this capital raise, with the help of our existing investors and a new financing partner, is huge validation that we are on the right track,” stated Craig Malloy, CEO of Vecna Robotics. “With fresh capital secured, we have the balance sheet to help us drive growth with our existing customers through improved product performance and the release of new automation technology that will change the game for material handling in warehousing and distribution.”

Vecna Robotics said its autonomous mobile robots (AMRs), Pivotal orchestration software, and round-the-clock Command Center can help supply chains automate critical workflows and maximize throughput at scale. The company has tightened its focus to self-driving forklifts, pallet jacks, and tuggers to address widespread labor shortages.

Vecna and GEODIS to automate case picking

Over the past year, Vecna Robotics has combined cloud software updates and investments in its Pivotal Command Center to help customers such as GEODIS, FedEx, Caterpillar, and Shape. They have realized as much as 70% performance improvements in ground-to-ground warehouse workflows including case picking, packaging, and cross-docking, it said.

Vecna said the cash infusion will support the launch of platforms that will enable it to “provide more deployment flexibility and reach into new workflows that are in high demand, while being able to continue delivering operator cost savings from Day 1.”

“GEODIS has been working with Vecna Robotics on the development of a new, groundbreaking case-picking solution that nearly doubles performance,” said Andy Johnston, senior director of innovation at GEODIS. “We are counting on this recent cash infusion at the company to speed up development and launch of a complete, market-ready offering that can be deployed right away.”

Vecna tests in house to ensure reliability

The Robot Report recently visited Vecna Robotics headquarters to see its “bowling alley,” where it tests its AMRs around the clock. The company tests capabilities including the “handshake” between its robots and conveyors.

For instance, during a demonstration, Vecna tested its Co-bot Pallet Jack (CPJ) picking up and dropping off heavy loads beyond what customers typically need. It tracks runtimes and multiple maneuvers, and a staffer stays overnight mainly to swap batteries.

“We’re always pleasantly surprised at our low failure rates,” observed Mark Fox, director of validation at Vecna. “We can replicate the conditions of a typical customer site, including obstacles. We analyze the scene and look at multiple pickups and drop offs so that performance doesn’t drop.”

Vecna has also developed sensing at height, enabled robots to accept some variance from existing maps, and participated in the MassRobotics interoperability standards effort.

“We’re working on applying our technology and data to case picking in addition to pallet movement,” explained test engineer Chinonso Ovuegbe.

What products and sectors are seeing the most demand?

“Self-driving forklifts and tuggers are our most popular products,” replied Fox. “Robotics-as-a-service [RaaS] per month is also popular, but some customers buy our products outright. Third-party logistics providers [3PLs] and automotive are booming.”

Automated forklift with full pallet load at Vecna’s headquarters. Source: Vecna Robotics

Investment and new COO to enable growth

Tiger Global Management, Proficio Capital Partners, and IMPULSE participated in Vecna Robotics’ Series C round. The company said the funding will allow it to deliver rapid returns on investment (ROI) to cost-conscious warehouse operators served by the $165 billion pallet-moving autonomy market.

To support its rapid expansion, Vecna also announced the appointment of Michael Helmbrecht as chief operating officer. He will oversee operations, manufacturing, IT, product, and customer success to ensure that the company continues to meet its customer-defined performance guarantees.

Helmbrecht has nearly 20 years of operations, product, and partnership experience from executive roles at Dell, Lifesize, and RingCentral. He joins Vecna after a year of triple-digit revenue growth, an over 100% increase in deployments, and the announcement of an industry-leading performance guarantee.

Automated pallet truck with full pallet load. Source: Vecna Robotics

The post Vecna Robotics raises more than $100M, hires COO to expand warehouse automation appeared first on The Robot Report.

]]>
https://www.therobotreport.com/vecna-robotics-raises-100m-hires-coo-expand-warehouse-automation/feed/ 0
Wayve launches PRISM-1 4D reconstruction model for autonomous driving https://www.therobotreport.com/wayve-launches-prism-1-4d-reconstruction-model-for-autonomous-driving/ https://www.therobotreport.com/wayve-launches-prism-1-4d-reconstruction-model-for-autonomous-driving/#respond Tue, 18 Jun 2024 17:15:35 +0000 https://www.therobotreport.com/?p=579482 Wayve says PRISM-1 enables scalable, realistic re-simulations of complex scenes with minimal engineering or labeling input. 

The post Wayve launches PRISM-1 4D reconstruction model for autonomous driving appeared first on The Robot Report.

]]>

A scene reconstructed by Wayve’s PRISM-1 technology. | Source: Wayve

Wayve, a developer of embodied artificial intelligence, launched PRISM-1, a 4D reconstruction model that it said can enhance the testing and training of its autonomous driving technology. 

The London-based company first showed the technology in December 2023 through its Ghost Gym neural simulator. Wayve used novel view synthesis to create precise 4D scene reconstructions (three dimensions in space plus time) using only camera inputs.

It achieved this using unique methods that the company claimed accurately and efficiently simulate the dynamics of complex, unstructured environments for advanced driver-assist systems (ADAS) and self-driving vehicles. PRISM-1 is the model that powers the next generation of Ghost Gym simulations.

“PRISM-1 bridges the gap between the real world and our simulator,” stated Jamie Shotton, chief scientist at Wayve. “By enhancing our simulation platform with accurate dynamic representations, Wayve can extensively test, validate, and fine-tune our AI models at scale.”

“We are building embodied AI technology that generalizes and scales,” he added. “To achieve this, we continue to advance our end-to-end AI capabilities, not only in our driving models, but also through enabling technologies like PRISM-1. We are also excited to publicly release our WayveScenes101 dataset, developed in conjunction with PRISM-1, to foster more innovation and research in novel view synthesis for driving.”

PRISM-1 excels at realism in simulation, Wayve says

Wayve said PRISM-1 enables scalable, realistic re-simulations of complex driving scenes with minimal engineering or labeling input. 

Unlike traditional methods, which rely on lidar and 3D bounding boxes, PRISM-1 uses novel view synthesis techniques to accurately depict moving elements like pedestrians, cyclists, vehicles, and traffic lights. The system includes precise details, like clothing patterns, brake lights, and windshield wipers.

Achieving realism is critical for building an effective training simulator and evaluating driving technologies, according to Wayve. Traditional simulation technologies treat vehicles as rigid entities and fail to capture safety-critical dynamic behaviors like indicator lights or sudden braking. 

PRISM-1, on the other hand, uses a flexible framework that can identify and track changes in the appearance of scene elements over time, said the company. This enables it to precisely re-simulate complex dynamic scenarios with elements that change in shape and move throughout the scene. 

It can distinguish between static and dynamic elements in a self-supervised manner, avoiding the need for explicit labels, scene graphs, and bounding boxes to define the configuration of a busy street.

Wayve said this approach maintains efficiency, even as scene complexity increases, ensuring that more complex scenarios do not require additional engineering effort. This makes PRISM-1 a scalable and efficient system for simulating complex urban environments, it asserted.
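The idea of separating static from dynamic scene elements without labels can be illustrated with a toy sketch. This is not Wayve's actual method; the scalar appearance features, the `temporal_variance` helper, and the threshold are all illustrative assumptions. The principle is simply that elements whose appearance stays nearly constant across frames can be treated as static, while those that change get flagged as dynamic:

```python
def temporal_variance(track):
    """Variance of a scalar appearance feature for one scene element over time."""
    mean = sum(track) / len(track)
    return sum((x - mean) ** 2 for x in track) / len(track)

def split_static_dynamic(tracks, threshold=0.01):
    """Label each element 'static' or 'dynamic' by how much its appearance
    varies across frames -- no explicit labels or bounding boxes required."""
    return {
        name: "dynamic" if temporal_variance(track) > threshold else "static"
        for name, track in tracks.items()
    }

# A building's feature barely changes between frames; a cyclist's changes a lot.
tracks = {
    "building": [0.50, 0.50, 0.51, 0.50],
    "cyclist":  [0.20, 0.45, 0.70, 0.95],
}
print(split_static_dynamic(tracks))  # {'building': 'static', 'cyclist': 'dynamic'}
```

A real system would work on learned, high-dimensional features rather than a single scalar, but the self-supervised signal is the same: temporal change, not human annotation.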

WayveScenes 101 benchmark released

Wayve also released its WayveScenes 101 benchmark. The dataset comprises 101 diverse driving scenarios from the U.K. and the U.S., including urban, suburban, and highway scenes across various weather and lighting conditions.

The company says it aims for this dataset to support the AI research community in advancing novel view synthesis models and the development of more robust and accurate scene representation models for driving. 

Last month, Wayve closed a $1.05 billion Series C funding round. SoftBank Group led the round, which also included new investor NVIDIA and existing investor Microsoft.

Since its founding, Wayve has developed and tested its autonomous driving system on public roads. It has also developed foundation models for autonomy, similar to “GPT for driving,” that it says can empower any vehicle to perceive its surroundings and safely drive through diverse environments. 

The post Wayve launches PRISM-1 4D reconstruction model for autonomous driving appeared first on The Robot Report.

]]>
https://www.therobotreport.com/wayve-launches-prism-1-4d-reconstruction-model-for-autonomous-driving/feed/ 0
Realtime Robotics celebrates motion-planning collaboration with Mitsubishi Electric https://www.therobotreport.com/realtime-robotics-celebrates-collaboration-with-mitsubishi-electric/ https://www.therobotreport.com/realtime-robotics-celebrates-collaboration-with-mitsubishi-electric/#respond Tue, 18 Jun 2024 16:05:44 +0000 https://www.therobotreport.com/?p=579481 Realtime Robotics is bringing its motion planning for industrial and collaborative robots to market with Mitsubishi Electric.

The post Realtime Robotics celebrates motion-planning collaboration with Mitsubishi Electric appeared first on The Robot Report.

]]>

Realtime Robotics demonstrates a multi-robot workcell during Mitsubishi Electric’s visit. Credit: Eugene Demaitre

BOSTON — As factories and warehouses look to automate more of their operations, they need confidence that multiple robots can conduct complex tasks repeatedly, reliably, and safely. Realtime Robotics has developed hardware-agnostic software to run and coordinate industrial workcells smoothly without error or collision.

“The lack of coordination on the fly is a key reason why we don’t see multiple robots in many applications today — even in machine tending, where multiple arms could be useful,” said Peter Howard, CEO of Realtime Robotics (RTR). “We’re planning with Mitsubishi Electric to put our motion planner into its CNC controller.”

The company last month received strategic investment from Mitsubishi Electric Corp. as part of its ongoing Series B round. Realtime Robotics said it plans to use the funding to continue scaling and refining its motion-planning optimization and runtime systems. 

Last week, a high-ranking delegation from Mitsubishi Electric visited Realtime Robotics to celebrate the companies’ collaboration. RTR demonstrated a workcell with four robot arms from different vendors, including Mitsubishi, that was able to optimize motion as desired in seconds.

“Mitsubishi Electric is a multi-business conglomerate, a technology leader, and one of the leading suppliers of factory automation products worldwide,” said Dr. Toshie Takeuchi, executive officer and group president for factory automation systems at Mitsubishi. “I see this partnership as the perfect point where experience meets innovation to create value for our customers, stakeholders, and society.”

She and Howard answered the following questions from The Robot Report:

Mitsubishi Electric, Realtime Robotics integrate technologies

How is Realtime Robotics’ motion-optimization software unique? How will it help Mitsubishi Electric’s customers?

Takeuchi: Realtime Robotics’ software is unique in many ways. It starts with the ability to do collision-free motion planning. From there, motion planning in both single-robot and multi-robot cells can be automatically optimized for cycle time.

Our customers will benefit from optimized cycle times that improve production efficiency and from reduced engineering effort in equipment design.

Howard: Typically, to provide access for multiple tools at once, you need an interlocked sequence, which loses time. According to the IFR [which recognized the company for its “choreography” tool], up to 70% of the cost of a robot is in programming it.

With RapidPlan, we automatically tune for fixed applications, saving time. Our cloud service can consume files and send back an optimized motion plan, enabling hundreds of thousands of motions in a couple of hours. It’s like Google Maps for industrial robots.
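Howard's "Google Maps" analogy can be made concrete with the textbook version of the problem: finding a shortest collision-free path around obstacles. The grid-based breadth-first search below is a classroom sketch under simplifying assumptions, not RTR's proprietary planner, which operates in a robot's high-dimensional joint space and coordinates multiple arms in real time:

```python
from collections import deque

def plan(grid, start, goal):
    """Breadth-first search on a grid: the textbook version of finding a
    shortest collision-free path around obstacles (cells marked 1)."""
    rows, cols = len(grid), len(grid[0])
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        r, c = path[-1]
        if (r, c) == goal:
            return path
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in seen):
                seen.add((nr, nc))
                queue.append(path + [(nr, nc)])
    return None  # goal unreachable

# 0 = free space, 1 = obstacle (e.g., another robot's reserved volume).
grid = [
    [0, 1, 0],
    [0, 1, 0],
    [0, 0, 0],
]
print(plan(grid, (0, 0), (0, 2)))
# [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2), (1, 2), (0, 2)]
```

Multi-robot coordination of the kind RTR describes adds a time dimension on top of this, so that two planned paths never occupy the same space at the same moment.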

Does Mitsubishi have a timeframe in mind for integrating Realtime’s technology into its controls for factory automation (FA)? When will they be available?

Takeuchi: We are starting by integrating RTR’s motion-planning and optimization technology into our 3D simulator to significantly improve equipment and system design.

Our plan is to incorporate this technology into our FA control systems, including PLCs and CNCs, and this integration is currently under development and testing, with a launch expected soon.

Howard: We’re currently validating and characterizing for remote optimization with customers. We’re also doing longevity testing here at our headquarters.

In the demo cell, you couldn’t easily program 1.7 million options for four different arms, but RapidPlan automates motion planning and calculates space reservations to avoid obstacles in real time. We do point-to-point, integrated spline-based movement.

Toyota asked us for a 16-arm cell to test spot welding, and we can add a second controller for an adjacent cell. We can currently control up to 12 robots for welding high and low on an auto body.

Mitsubishi Electric recently launched the RV-35/50/80 FR industrial robots — are they designed to work with Realtime’s technology?

Takeuchi: Yes, they are. Our robots are developed on a common platform that integrates seamlessly with RTR’s technology.

Howard: For example, Sony uses Mitsubishi robots to manufacture 2-cm parts, and we can get down to submillimeter accuracy if it’s a known object with a CAD file.

Cobots are fine for larger objects and voxels, but users must still conduct safety assessments.


MELCO’s Dr. Takeuchi changes optimization parameters during RTR demonstration by Kevin Carlin, chief commercial officer. Source: Realtime Robotics

RTR optimizes motion for multiple applications

What sorts of applications or use cases do Mitsubishi and Realtime expect to benefit from closer coordination among robots?

Takeuchi: Our interactions with customers suggest that almost all manufacturing sites continuously need to increase production, efficiency, profitability, and sustainability.

With our collaboration, we can reduce the robots’ cycle time, hence increasing efficiency. Multi-robot applications can collaborate seamlessly, increasing throughput and optimizing floor space.

By implementing collision-free motion planning, we help our customers reduce the potential for collisions, thereby reducing losses and improving overall performance.

Howard: It’s all about shortening cycle times and avoiding collisions. In Europe, energy efficiency is increasingly a priority, and in Japan, floor space is at a premium, but throughput is still the most important.

Our mission is to make automation simpler to program. For customers like Mitsubishi, Toyota, and Siemens, the hardware has to be industrial-grade, and so does the software. We talk to all the OEMs and have close relationships with the major robot suppliers.

This is ideal for use cases such as gluing, deburring, welding, and assembly. RapidSense can also be helpful in mixed-case palletizing. For mobile manipulation, RTR’s software could plan the motion of both the AMR [autonomous mobile robot] and the arm.


Members of Realtime Robotics and Mitsubishi Electric’s teams celebrate their collaboration. Source: Realtime Robotics

Mitsubishi strengthens partnership

Do you expect that the addition of a member to Realtime Robotics’ board of directors will help it jointly plan future products with Mitsubishi Electric?

Takeuchi: Yes. Since our initial investment in Realtime Robotics, we have both benefited from this partnership. We look forward to integrating the Realtime Robotics technology into our portfolio of products to continue enhancing our next-gen products with advanced features and scalability.

Howard: RTR has been working with Mitsubishi since 2018, so it’s our longest-standing customer and partner. We have other investors, but our relationship with Mitsubishi is more holistic, broader, and deeper.

We’ve seen a lot of Mitsubishi Electric’s team as we create our products, and we look forward to reaching the next steps to market together.

The post Realtime Robotics celebrates motion-planning collaboration with Mitsubishi Electric appeared first on The Robot Report.

]]>
https://www.therobotreport.com/realtime-robotics-celebrates-collaboration-with-mitsubishi-electric/feed/ 0
Waabi raises $200M from Uber, NVIDIA, and others on the road to self-driving trucks https://www.therobotreport.com/waabi-raises-200m-uber-nvidia-on-the-road-self-driving-trucks/ https://www.therobotreport.com/waabi-raises-200m-uber-nvidia-on-the-road-self-driving-trucks/#respond Tue, 18 Jun 2024 12:40:06 +0000 https://www.therobotreport.com/?p=579477 Waabi, which has been developing self-driving trucks using generative AI, plans to put its systems on Texas roads in 2025.

The post Waabi raises $200M from Uber, NVIDIA, and others on the road to self-driving trucks appeared first on The Robot Report.

]]>

The Waabi Driver includes a generative AI stack as well as sensors and compute hardware. Source: Waabi

Autonomous passenger vehicles have hit potholes over the past few years, with accidents leading to regulatory scrutiny, but investment in self-driving trucks has continued. Waabi today announced that it has raised $200 million in an oversubscribed Series B round. The funding brings total investment in the Toronto-based startup to more than $280 million.

Waabi said that it “is on the verge of Level 4 autonomy” and that it expects to deploy fully autonomous trucks in Texas next year. The company claimed that it has been able to advance quickly toward that goal because of its use of generative artificial intelligence in the physical world.

“I have spent most of my professional life dedicated to inventing new AI technologies that can deliver on the enormous potential of AI in the physical world in a provably safe and scalable way,” stated Raquel Urtasun, a professor at the University of Toronto and founder and CEO of Waabi.

“Over the past three years, alongside the incredible team at Waabi, I have had the chance to turn these breakthroughs into a revolutionary product that has far surpassed my expectations,” she added. “We have everything we need — breakthrough technology, an incredible team, and pioneering partners and investors — to launch fully driverless autonomous trucks in 2025. This is monumental for the industry and truly marks the beginning of the next frontier for AI.”

Waabi uses generative AI to reduce on-road testing

Waabi said it is pioneering generative AI for the physical world, starting with applying the technology to self-driving trucks. The company said it has developed “a single end-to-end AI system that is capable of human-like reasoning, enabling it to generalize to any situation that might happen on the road, including those it has never seen before.”

Because of that ability to generalize, the system requires significantly less training data and compute resources than other approaches to autonomy, asserted Waabi. In addition, the company claimed that its system is fully interpretable and that its safety can be validated and verified.

The company said Copilot4D, its “end-to-end AI system, paired with Waabi World, the world’s most advanced simulator, reduces the need for extensive on-road testing and enables a safer, more efficient solution that is highly performant and scalable from Day 1.”

Several industry observers have pointed out that self-driving trucks will likely arrive on public roads before widespread deployments of robotaxis in the U.S. While Waymo has pumped the brakes on development, other companies have made progress, including Inceptio, FERNRIDE, Kodiak Robotics, and Aurora.

At the same time, work on self-driving cars continues, with Wayve raising $1.05 billion last month and TIER IV obtaining $54 million. General Motors invested another $850 million in Cruise yesterday.

“Self-driving technology is a prime example of how AI can dramatically improve our lives,” said AI luminary Geoff Hinton. “Raquel and Waabi are at the forefront of innovation, developing a revolutionary approach that radically changes the way autonomous systems work and leads to safer and more efficient solutions.”

Waabi plans to expand its commercial operations and grow its team in Canada and the U.S. The company cited recent accomplishments, including the opening of its new Texas AV trucking terminal, a collaboration with NVIDIA to integrate NVIDIA DRIVE Thor into the Waabi Driver, and its ongoing partnership with Uber Freight. It has run autonomous shipments for Fortune 500 companies and top-tier shippers in Texas.


Copilot4D predicts future lidar point clouds from a history of past observations, similar to how large language models (LLMs) predict the next word given the preceding text. Source: Waabi
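The next-token analogy can be illustrated with a deliberately tiny sketch. This is not Waabi's architecture; the letter "tokens" and the frequency-count model below are stand-ins. The point is only that once lidar frames are quantized into discrete tokens, predicting the future reduces to predicting the most likely next token given the history, exactly as a language model predicts the next word:

```python
from collections import Counter, defaultdict

def train_next_token(sequences):
    """Toy next-token model: count which token follows which, the discrete
    analogue of a language model's next-word statistics."""
    follows = defaultdict(Counter)
    for seq in sequences:
        for prev, nxt in zip(seq, seq[1:]):
            follows[prev][nxt] += 1
    return follows

def predict_next(follows, token):
    """Most frequent successor of `token` in the training sequences."""
    return follows[token].most_common(1)[0][0]

# Pretend each letter is a quantized patch of a lidar frame.
history = ["ABAB", "ABAC", "ABAB"]
model = train_next_token(history)
print(predict_next(model, "A"))  # 'B' -- it follows 'A' most often
```

A real 4D foundation model replaces the frequency table with a large neural network and the letters with learned point-cloud tokens, but the training objective has the same shape.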

Technology leaders invest in self-driving trucks

Waabi noted that top AI, automotive, and logistics enterprises were among its investors. Uber and Khosla Ventures led Waabi’s Series B round. Other participants included NVIDIA, Volvo Group Venture Capital, Porsche Automobil Holding, Scania Invest, and Ingka Investments.

“Waabi is developing autonomous trucking by applying cutting-edge generative AI to the physical world,” said Jensen Huang, founder and CEO of NVIDIA. “I’m excited to support Raquel’s vision through our investment in Waabi, which is powered by NVIDIA technology. I have championed Raquel’s pioneering work in AI for more than a decade. Her tenacity to solve the impossible is an inspiration.”

Additional support came from HarbourVest Partners, G2 Venture Partners, BDC Capital’s Thrive Venture Fund, Export Development Canada, Radical Ventures, Incharge Capital, and others.

“We are big believers in the potential for autonomous technology to revolutionize transportation, making a safer and more sustainable future possible,” added Dara Khosrowshahi, CEO of Uber. “Raquel is a visionary in the field, and under her leadership, Waabi’s AI-first approach provides a solution that is extremely exciting in both its scalability and capital efficiency.”

Vinod Khosla, founder of Khosla Ventures, said: “Change never comes from incumbents but from the innovation of entrepreneurs that challenge the status quo. Raquel and her team at Waabi have done exactly that with their products and business execution. We backed Waabi very early on with the bet that generative AI would transform transportation and are thrilled to continue on this journey with them as they move towards commercialization.”

The post Waabi raises $200M from Uber, NVIDIA, and others on the road to self-driving trucks appeared first on The Robot Report.

]]>
https://www.therobotreport.com/waabi-raises-200m-uber-nvidia-on-the-road-self-driving-trucks/feed/ 0
At CVPR, NVIDIA offers Omniverse microservices, shows advances in visual generative AI https://www.therobotreport.com/nvidia-offers-omniverse-microservices-advances-visual-generative-ai-cvpr/ https://www.therobotreport.com/nvidia-offers-omniverse-microservices-advances-visual-generative-ai-cvpr/#respond Mon, 17 Jun 2024 13:00:07 +0000 https://www.therobotreport.com/?p=579457 Omniverse Cloud Sensor RTX can generate synthetic data for robotics, says NVIDIA, which is presenting over 50 research papers at CVPR.

The post At CVPR, NVIDIA offers Omniverse microservices, shows advances in visual generative AI appeared first on The Robot Report.

]]>

As shown at CVPR, Omniverse Cloud Sensor RTX microservices generate high-fidelity sensor simulation from an autonomous vehicle (left) and an autonomous mobile robot (right). Sources: NVIDIA, Fraunhofer IML (right)

NVIDIA Corp. today announced NVIDIA Omniverse Cloud Sensor RTX, a set of microservices that enable physically accurate sensor simulation to accelerate the development of all kinds of autonomous machines.

NVIDIA researchers are also presenting more than 50 research projects around visual generative AI at the Computer Vision and Pattern Recognition, or CVPR, conference this week in Seattle. They include new techniques to create and interpret images, videos, and 3D environments. In addition, the company said it has created its largest indoor synthetic dataset with Omniverse for CVPR’s AI City Challenge.

Sensors provide industrial manipulators, mobile robots, autonomous vehicles, humanoids, and smart spaces with the data they need to comprehend the physical world and make informed decisions.

NVIDIA said developers can use Omniverse Cloud Sensor RTX to test sensor perception and associated AI software in physically accurate, realistic virtual environments before real-world deployment. This can enhance safety while saving time and costs, it said.

“Developing safe and reliable autonomous machines powered by generative physical AI requires training and testing in physically based virtual worlds,” stated Rev Lebaredian, vice president of Omniverse and simulation technology at NVIDIA. “Omniverse Cloud Sensor RTX microservices will enable developers to easily build large-scale digital twins of factories, cities and even Earth — helping accelerate the next wave of AI.”

Omniverse Cloud Sensor RTX supports simulation at scale

Built on the OpenUSD framework and powered by NVIDIA RTX ray-tracing and neural-rendering technologies, Omniverse Cloud Sensor RTX combines real-world data from videos, cameras, radar, and lidar with synthetic data.

Omniverse Cloud Sensor RTX includes software application programming interfaces (APIs) to accelerate the development of autonomous machines for any industry, NVIDIA said.

Even for scenarios with limited real-world data, the microservices can simulate a broad range of activities, claimed the company. It cited examples such as whether a robotic arm is operating correctly, an airport luggage carousel is functional, a tree branch is blocking a roadway, a factory conveyor belt is in motion, or a robot or person is nearby.
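As a rough illustration of what a sensor simulator computes, a 2D range sensor can be simulated by stepping each ray through a map until it hits something. Omniverse Cloud Sensor RTX does this with full physics and ray tracing; the grid map, cell values, and step semantics below are simplifying assumptions for the sketch:

```python
def simulate_range_sensor(grid, origin, directions, max_range=10):
    """Toy 2D range sensor: step along each ray until an obstacle (1) or the
    map edge is reached, returning the distance -- the core loop of any
    lidar simulator, minus the physics."""
    readings = []
    for dr, dc in directions:
        r, c = origin
        dist = 0
        while dist < max_range:
            r, c = r + dr, c + dc
            dist += 1
            if not (0 <= r < len(grid) and 0 <= c < len(grid[0])) or grid[r][c] == 1:
                break
        readings.append(dist)
    return readings

# 0 = free space, 1 = obstacle.
grid = [
    [0, 0, 0, 1],
    [0, 0, 0, 1],
    [1, 0, 0, 0],
]
# Rays cast right, left, down, and up from cell (1, 2).
print(simulate_range_sensor(grid, (1, 2), [(0, 1), (0, -1), (1, 0), (-1, 0)]))
# [1, 3, 2, 2]
```

The payoff of doing this in simulation, as the article notes, is that perception software can be exercised against rare or dangerous scenes long before a physical sensor ever sees them.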

Microservice to be available for AV development 

CARLA, Foretellix, and MathWorks are among the first software developers with access to Omniverse Cloud Sensor RTX for autonomous vehicles (AVs). The microservices will also enable sensor makers to validate and integrate digital twins of their systems in virtual environments, reducing the time needed for physical prototyping, said NVIDIA.

Omniverse Cloud Sensor RTX will be generally available later this year. NVIDIA noted that its announcement coincided with its first-place win at the Autonomous Grand Challenge for End-to-End Driving at Scale at CVPR.

The NVIDIA researchers’ winning workflow can be replicated in high-fidelity simulated environments with Omniverse Cloud Sensor RTX. Developers can use it to test self-driving scenarios in physically accurate environments before deploying AVs in the real world, said the company.

Two of NVIDIA’s papers — one on the training dynamics of diffusion models and another on high-definition maps for autonomous vehicles — are finalists for the Best Paper Awards at CVPR.

The company also said its win for the End-to-End Driving at Scale track demonstrates its use of generative AI for comprehensive self-driving models. The winning submission outperformed more than 450 entries worldwide and received CVPR’s Innovation Award.

Collectively, the work introduces artificial intelligence models that could accelerate the training of robots for manufacturing, enable artists to more quickly realize their visions, and help healthcare workers process radiology reports.

“Artificial intelligence — and generative AI in particular — represents a pivotal technological advancement,” said Jan Kautz, vice president of learning and perception research at NVIDIA. “At CVPR, NVIDIA Research is sharing how we’re pushing the boundaries of what’s possible — from powerful image-generation models that could supercharge professional creators to autonomous driving software that could help enable next-generation self-driving cars.”

Foundation model eases object pose estimation

NVIDIA researchers at CVPR are also presenting FoundationPose, a foundation model for object pose estimation and tracking that can be instantly applied to new objects during inference, without the need for fine-tuning. The model uses either a small set of reference images or a 3D representation of an object to understand its shape. It set a new record on a benchmark for object pose estimation.

FoundationPose can then identify and track how that object moves and rotates in 3D across a video, even in poor lighting conditions or complex scenes with visual obstructions, explained NVIDIA.

Industrial robots could use FoundationPose to identify and track the objects they interact with. Augmented reality (AR) applications could also use it with AI to overlay visuals on a live scene.
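The tracking half of the problem has a simple core, shown here as a planar sketch. FoundationPose works with full 6D poses estimated from images; the 2D pose representation and the motion values below are illustrative assumptions. The idea is that each frame's estimated motion is composed onto the previous pose to maintain a trajectory:

```python
import math

def compose(pose, delta):
    """Apply a frame-to-frame motion (dx, dy, dtheta) to a planar pose
    (x, y, theta) -- the core update of a pose-tracking loop."""
    x, y, th = pose
    dx, dy, dth = delta
    # Rotate the motion into the current heading before adding it.
    nx = x + dx * math.cos(th) - dy * math.sin(th)
    ny = y + dx * math.sin(th) + dy * math.cos(th)
    return (nx, ny, (th + dth) % (2 * math.pi))

def track(initial, deltas):
    """Chain per-frame motion estimates into a full trajectory."""
    poses = [initial]
    for d in deltas:
        poses.append(compose(poses[-1], d))
    return poses

# Four identical "move forward 1 m, turn 90 degrees" estimates trace a square.
traj = track((0.0, 0.0, 0.0), [(1.0, 0.0, math.pi / 2)] * 4)
x, y, th = traj[-1]
print(abs(x) < 1e-9, abs(y) < 1e-9)  # True True -- back at the start
```

Tracking in 3D adds a rotation representation such as quaternions, and a model like FoundationPose supplies the per-frame pose estimates that a loop like this consumes.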

NeRFDeformer transforms data from a single image

NVIDIA’s research includes a text-to-image model that can be customized to depict a specific object or character, a new model for object-pose estimation, a technique to edit neural radiance fields (NeRFs), and a visual language model that can understand memes. Additional papers introduce domain-specific innovations for industries including automotive, healthcare, and robotics.

A NeRF is an AI model that can render a 3D scene based on a series of 2D images taken from different positions in the environment. In robotics, NeRFs can generate immersive 3D renders of complex real-world scenes, such as a cluttered room or a construction site.

However, to make any changes, developers would need to manually define how the scene has transformed — or remake the NeRF entirely.

Researchers from the University of Illinois Urbana-Champaign and NVIDIA have simplified the process with NeRFDeformer. The method can transform an existing NeRF using a single RGB-D image, which is a combination of a normal photo and a depth map that captures how far each object in a scene is from the camera.
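The rendering step that NeRFs rely on can be sketched in a few lines. A real NeRF predicts density and color with a neural network and integrates along rays in 3D; the hand-picked densities and colors here are illustrative. A pixel's value is the color accumulated along a camera ray, with each sample weighted by how much light survives to reach it:

```python
import math

def render_ray(densities, colors, step=1.0):
    """Minimal NeRF-style volume rendering: accumulate color along one ray,
    weighting each sample by the light remaining after earlier samples."""
    transmittance = 1.0
    pixel = 0.0
    for sigma, color in zip(densities, colors):
        alpha = 1.0 - math.exp(-sigma * step)  # opacity of this sample
        pixel += transmittance * alpha * color
        transmittance *= 1.0 - alpha           # light absorbed so far
    return pixel

# Two empty samples, then a dense surface: the surface's color dominates.
print(round(render_ray([0.0, 0.0, 10.0], [0.2, 0.5, 0.9]), 3))  # 0.9
```

Editing a scene the way NeRFDeformer does means changing what the density and color functions return at each 3D point, which is why a single RGB-D observation of the transformed scene is such a compact way to specify the edit.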


Researchers have simplified the process of generating a 3D scene from 2D images using NeRFs. Source: NVIDIA

JeDi model shows how to simplify image creation at CVPR

Creators typically use diffusion models to generate specific images based on text prompts. Prior research focused on the user training a model on a custom dataset, but the fine-tuning process can be time-consuming and inaccessible to general users, said NVIDIA.

JeDi, a paper by researchers from Johns Hopkins University, Toyota Technological Institute at Chicago, and NVIDIA, proposes a new technique that allows users to personalize the output of a diffusion model within a couple of seconds using reference images. The team found that the model outperforms existing methods.

NVIDIA added that JeDi can be combined with retrieval-augmented generation, or RAG, to generate visuals specific to a database, such as a brand’s product catalog.
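The retrieval half of such a RAG pipeline is easy to sketch. The catalog entries and embeddings below are made up, and a real system would use learned image embeddings rather than three-element vectors; the mechanism is simply ranking catalog items by similarity to the query and handing the top matches to the generator as reference images:

```python
def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(x * x for x in b) ** 0.5
    return dot / (na * nb)

def retrieve(query, catalog, k=2):
    """Retrieval step of a RAG-style pipeline: rank catalog images by
    embedding similarity, then pass the top-k as generation references."""
    ranked = sorted(catalog, key=lambda item: cosine(query, item[1]), reverse=True)
    return [name for name, _ in ranked[:k]]

# Hypothetical brand catalog with toy embeddings.
catalog = [
    ("red_sneaker",  [0.9, 0.1, 0.0]),
    ("blue_jacket",  [0.1, 0.9, 0.1]),
    ("red_backpack", [0.8, 0.2, 0.1]),
]
print(retrieve([1.0, 0.0, 0.0], catalog))  # ['red_sneaker', 'red_backpack']
```

In the workflow NVIDIA describes, the retrieved images would then condition a JeDi-style diffusion model so that generated visuals stay faithful to the catalog items.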


JeDi is a new technique that allows users to easily personalize the output of a diffusion model within a couple of seconds using reference images, like an astronaut cat that can be placed in different environments. Source: NVIDIA

Visual language model helps AI get the picture

NVIDIA said it has collaborated with the Massachusetts Institute of Technology (MIT) to advance the state of the art for vision language models, which are generative AI models that can process videos, images, and text. The partners developed VILA, a family of open-source visual language models that they said outperforms prior neural networks on benchmarks that test how well AI models answer questions about images.

VILA’s pretraining process provided enhanced world knowledge, stronger in-context learning, and the ability to reason across multiple images, claimed the MIT and NVIDIA team.

The VILA model family can be optimized for inference using the NVIDIA TensorRT-LLM open-source library and can be deployed on NVIDIA GPUs in data centers, workstations, and edge devices.


VILA can understand memes and reason based on multiple images or video frames. Source: NVIDIA

Generative AI drives AV, smart city research at CVPR

NVIDIA Research has hundreds of scientists and engineers worldwide, with teams focused on topics including AI, computer graphics, computer vision, self-driving cars, and robotics. A dozen of the NVIDIA-authored CVPR papers focus on autonomous vehicle research.

“Producing and Leveraging Online Map Uncertainty in Trajectory Prediction,” a paper authored by researchers from the University of Toronto and NVIDIA, has been selected as one of 24 finalists for CVPR’s best paper award.

In addition, Sanja Fidler, vice president of AI research at NVIDIA, will present on vision language models at the Workshop on Autonomous Driving today.

NVIDIA has contributed to the CVPR AI City Challenge for the eighth consecutive year to help advance research and development for smart cities and industrial automation. The challenge’s datasets were generated using NVIDIA Omniverse, a platform of APIs, software development kits (SDKs), and services for building applications and workflows based on Universal Scene Description (OpenUSD).


AI City Challenge synthetic datasets span multiple environments generated by NVIDIA Omniverse, allowing hundreds of teams to test AI models in physical settings such as retail and warehouse environments to enhance operational efficiency. Source: NVIDIA

About the author

Isha Salian writes about deep learning, science and healthcare, among other topics, as part of NVIDIA’s corporate communications team. She first joined the company as an intern in summer 2015. Isha has a journalism M.A., as well as undergraduate degrees in communication and English, from Stanford.

The post At CVPR, NVIDIA offers Omniverse microservices, shows advances in visual generative AI appeared first on The Robot Report.

RBR50 Spotlight: Opteran Mind reverse-engineers natural brain algorithms for mobile robot autonomy
https://www.therobotreport.com/rbr50-spotlight-opteran-mind-reverse-engineers-brain-algorithms-mobile-robot-autonomy/
Tue, 11 Jun 2024 14:28:47 +0000
Opteran commercialized its vision-based approach to autonomy by releasing Opteran Mind.

The post RBR50 Spotlight: Opteran Mind reverse-engineers natural brain algorithms for mobile robot autonomy appeared first on The Robot Report.



Organization: Opteran
Country: U.K.
Website: https://opteran.com
Year Founded: 2019
Number of Employees: 11-50
Innovation Class: Technology


Current approaches to machine autonomy require a lot of sensor data and expensive compute and often still fail when exposed to the dynamic nature of the real world, according to Opteran. The company earned RBR50 recognition in 2021 for its lightweight Opteran Development kit, which took inspiration from research into insect intelligence.


In December 2023, Opteran commercialized its vision-based approach to autonomy by releasing Opteran Mind. The company, which has a presence in the U.K., Japan, and the U.S., announced that its new algorithms don’t require training, extensive infrastructure, or connectivity for perception and navigation.

This is an alternative to conventional AI and simultaneous localization and mapping (SLAM) approaches, which are based on decades-old models of the human visual cortex, said James Marshall, a professor at the University of Sheffield and chief scientific officer at Opteran. Animal brains evolved to solve for motion first, not points in space, he noted.

Instead, Opteran Mind is a software product that can run with low-cost, 2D CMOS cameras and on low-power compute for non-deterministic path planning. OEMs and systems integrators can build bespoke systems on the reference hardware for mobile robots, aerial drones, and other devices.
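The "solve for motion first" idea can be illustrated with a toy example: estimating how far a pattern shifted between two frames by scanning candidate offsets, rather than triangulating points in space. This is purely illustrative, not Opteran's algorithm; the 1D "frames" and cost function are invented for the sketch.

```python
# Toy illustration of motion-first visual processing: estimate how far a
# 1D brightness profile shifted between frames by picking the offset with
# the lowest average absolute difference. Illustrative only, not
# Opteran's algorithm; the "frames" are invented values.
def estimate_shift(prev, curr, max_shift=3):
    # The search window is kept small so edge effects don't dominate.
    best_shift, best_cost = 0, float("inf")
    for shift in range(-max_shift, max_shift + 1):
        cost, count = 0.0, 0
        for i, value in enumerate(prev):
            j = i + shift
            if 0 <= j < len(curr):
                cost += abs(curr[j] - value)
                count += 1
        if count == 0:
            continue
        avg = cost / count
        if avg < best_cost:
            best_cost, best_shift = avg, shift
    return best_shift

frame0 = [0, 0, 10, 50, 10, 0, 0, 0]
frame1 = [0, 0, 0, 10, 50, 10, 0, 0]  # same pattern, moved right by one pixel
print(estimate_shift(frame0, frame1))  # 1
```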

“We provide localization, mapping, and collision prediction from robust panoramic, stabilized 3D CMOS camera input,” explained Marshall.

At a recent live demonstration at MassRobotics in Boston, the company showed how a simple autonomous mobile robot (AMR) using Opteran Mind 4.1 could navigate and avoid obstacles in a mirrored course that would normally be difficult for other technologies.

Opteran is currently focusing on automated guided vehicles (AGVs), AMRs, and drones for warehousing, inspection, and maintenance.

“We have the only solution that provides robust localization in challenging environments with scene changes, aliasing, and highly dynamic light using the lowest-cost cameras and compute,” the company said.

The company is currently working toward safety certifications and “decision engines,” according to Marshall.




Explore the RBR50 Robotics Innovation Awards 2024.


RBR50 Robotics Innovation Awards 2024

Organization | Innovation
ABB Robotics | Modular industrial robot arms offer flexibility
Advanced Construction Robotics | IronBOT makes rebar installation faster, safer
Agility Robotics | Digit humanoid gets feet wet with logistics work
Amazon Robotics | Amazon strengthens portfolio with heavy-duty AGV
Ambi Robotics | AmbiSort uses real-world data to improve picking
Apptronik | Apollo humanoid features bespoke linear actuators
Boston Dynamics | Atlas shows off unique skills for humanoid
Brightpick | Autopicker applies mobile manipulation, AI to warehouses
Capra Robotics | Hircus AMR bridges gap between indoor, outdoor logistics
Dexterity | Dexterity stacks robotics and AI for truck loading
Disney | Disney brings beloved characters to life through robotics
Doosan | App-like Dart-Suite eases cobot programming
Electric Sheep | Vertical integration positions landscaping startup for success
Exotec | Skypod ASRS scales to serve automotive supplier
FANUC | FANUC ships one-millionth industrial robot
Figure | Startup builds working humanoid within one year
Fraunhofer Institute for Material Flow and Logistics | evoBot features unique mobile manipulator design
Gardarika Tres | Develops de-mining robot for Ukraine
Geek+ | Upgrades PopPick goods-to-person system
Glidance | Provides independence to visually impaired individuals
Harvard University | Exoskeleton improves walking for people with Parkinson’s disease
ifm efector | Obstacle Detection System simplifies mobile robot development
igus | ReBeL cobot gets low-cost, human-like hand
Instock | Instock turns fulfillment processes upside down with ASRS
Kodama Systems | Startup uses robotics to prevent wildfires
Kodiak Robotics | Autonomous pickup truck to enhance U.S. military operations
KUKA | Robotic arm leader doubles down on mobile robots for logistics
Locus Robotics | Mobile robot leader surpasses 2 billion picks
MassRobotics Accelerator | Equity-free accelerator positions startups for success
Mecademic | MCS500 SCARA robot accelerates micro-automation
MIT | Robotic ventricle advances understanding of heart disease
Mujin | TruckBot accelerates automated truck unloading
Mushiny | Intelligent 3D sorter ramps up throughput, flexibility
NASA | MOXIE completes historic oxygen-making mission on Mars
Neya Systems | Development of cybersecurity standards hardens AGVs
NVIDIA | Nova Carter gives mobile robots all-around sight
Olive Robotics | EdgeROS eases robotics development process
OpenAI | LLMs enable embedded AI to flourish
Opteran | Applies insect intelligence to mobile robot navigation
Renovate Robotics | Rufus robot automates installation of roof shingles
Robel | Automates railway repairs to overcome labor shortage
Robust AI | Carter AMR joins DHL's impressive robotics portfolio
Rockwell Automation | Adds OTTO Motors mobile robots to manufacturing lineup
Sereact | PickGPT harnesses power of generative AI for robotics
Simbe Robotics | Scales inventory robotics deal with BJ’s Wholesale Club
Slip Robotics | Simplifies trailer loading/unloading with heavy-duty AMR
Symbotic | Walmart-backed company rides wave of logistics automation demand
Toyota Research Institute | Builds large behavior models for fast robot teaching
ULC Technologies | Cable Splicing Machine improves safety, power grid reliability
Universal Robots | Cobot leader strengthens lineup with UR30

The post RBR50 Spotlight: Opteran Mind reverse-engineers natural brain algorithms for mobile robot autonomy appeared first on The Robot Report.

Unleashing potential: The role of software development in advancing robotics
https://www.therobotreport.com/unleashing-potential-software-development-role-advancing-robotics/
Sun, 09 Jun 2024 15:15:09 +0000
As robotics serves more use cases across industries, hardware and software development should be parallel efforts, says Radixweb.

The post Unleashing potential: The role of software development in advancing robotics appeared first on The Robot Report.


A robotics strategy should consider software development in parallel, says Radixweb. Source: Adobe Stock

In today’s fast-tech era, robotics engineering is transforming multiple industrial sectors. From Cartesian robots to robotaxis, cutting-edge technologies are automating applications in logistics, healthcare, finance, and manufacturing. Modern automation uses software to execute many tasks, or even one specific task, with minimal human interference. Hence, software development is a critical player in building these robots.

The growing technology stack in robotics is one reason the software development market is expected to reach a whopping valuation of $1 billion by 2027. The industry involves designing, building, and maintaining software using complex algorithms, machine learning, and artificial intelligence to make operations more efficient and enable autonomous decision making.

Integrating robotics and software development

With the evolution of robotics, this subset of software engineering offers a new era of opportunities. Developers are now working on intelligent machines that can execute multiple tasks with minimal human intervention, powered by new software frameworks designed specifically for these systems.

From perception and navigation to object recognition and manipulation, as well as higher-level tasks such as fleet management and human-machine interaction, reliable and explainable software is essential to commercially successful systems.

One of the essential functions of software engineering is the building and testing of robotics applications. Developers need to simulate real-world scenarios and gather insights against their testing goals, so that they can recognize and rectify bugs before deploying applications in a real environment.

In addition, developers should remember that they are building systems to minimize human effort, not just improve industrial efficiency. Their efforts are not just for the sake of novel technologies but to provide economic and social benefits.




Software developers can advance robotics

Integrating software and robotics promises a symbiotic partnership between the two domains. Apart from collaborating on cutting-edge systems, coordinated development efforts enable the following benefits:

  1. Consistency — Robots can be programmed to execute commands with consistency, eradicating human errors caused by distractions or fatigue.
  2. Precision — Advanced algorithms also allow robots to perform tasks with high precision, enhancing overall product quality.
  3. Increased speed — Software-driven robots can carry out tasks much faster than human beings, saving time and money in production activities.
  4. Motion planning — Along with modern motors, motion control software allows robots to navigate through complex environments while avoiding potential injuries or collisions.
  5. Minimal risk — Advanced robots can handle tasks that involve high physical risks, extreme temperatures, or exposure to toxic materials, ensuring employees’ safety.
  6. Remote operations — Building advanced software systems for robots enables them to be monitored and controlled remotely, minimizing the need for human workers to be always present in hazardous settings.
  7. AI and machine learning — The integration of AI can help robots understand, learn, adapt, and make independent decisions based on the data collected.
  8. Real-time data analysis — As stationary and mobile platforms, robots can gather large amounts of data during their operations. With the right software, this data can easily be examined in real time to determine areas for improvement.
  9. Scalability — Robot users can use software to scale robot fleets up or down in response to ever-changing business demands, providing operational flexibility.
  10. Reduced downtime — With predictive maintenance software, robots can reliably function for a long time.
  11. Decreased labor costs — Robotics minimizes the requirement for manual labor, reducing the cost of hiring human resources and emphasizing more complex activities that need creativity and critical thinking.
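As an illustration of the real-time data analysis and predictive-maintenance points above, the following minimal sketch flags telemetry readings that drift from a rolling baseline. The window size, tolerance, and temperature values are made-up examples, not vendor defaults.

```python
# Minimal sketch of real-time telemetry screening for predictive
# maintenance: flag readings that drift beyond a tolerance band around a
# rolling average. Window size, tolerance, and temperatures are
# illustrative values.
from collections import deque

class DriftMonitor:
    def __init__(self, window=5, tolerance=10.0):
        self.readings = deque(maxlen=window)
        self.tolerance = tolerance

    def check(self, value):
        """Return True if value deviates too far from recent history."""
        if len(self.readings) == self.readings.maxlen:
            baseline = sum(self.readings) / len(self.readings)
            anomalous = abs(value - baseline) > self.tolerance
        else:
            anomalous = False  # not enough history yet
        self.readings.append(value)
        return anomalous

monitor = DriftMonitor(window=3, tolerance=5.0)
temps = [60.0, 61.0, 60.5, 61.2, 75.0]  # final reading spikes
flags = [monitor.check(t) for t in temps]
print(flags)  # only the spike is flagged: [False, False, False, False, True]
```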

Best practices for integrating software and robots

To fully leverage the benefits of software development for robotics, businesses must adopt effective strategies. Here are a few tailored practices to consider:

  • Design an intuitive user interface for managing and configuring automated processes.
  • Integrate real-time monitoring and reporting functionalities to track the progress of your tasks.
  • Adopt continuous integration practices to integrate code modifications and ensure system durability constantly.
  • Adhere to applicable data-privacy and cybersecurity protocols to maintain client trust.
  • Analyze existing workflows to detect any vulnerabilities and areas for improvement.
  • Use error-handling techniques to handle any unforeseen scenarios.
  • Implement automated testing frameworks to encourage efficient testing.
  • Provide suitable access controls to protect these systems from unauthorized access.
  • Identify the applications that can be automated for a particular market.
  • Break down complicated tasks into small, manageable steps.
  • Perform extensive testing to recognize and rectify any issues or errors.
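The error-handling practice above can be made concrete with a bounded-retry wrapper: try a command a fixed number of times before escalating. The `send_command`/`flaky_send` stubs are hypothetical stand-ins for a vendor API.

```python
# Sketch of the error-handling practice above: retry a flaky robot command
# a bounded number of times before escalating. `send_command` is a
# hypothetical stand-in for a vendor API call.
import time

class CommandError(Exception):
    """Raised when a command fails at the transport or controller level."""

def send_with_retry(send_command, command, retries=3, delay=0.01):
    last_error = None
    for attempt in range(1, retries + 1):
        try:
            return send_command(command)
        except CommandError as exc:
            last_error = exc
            time.sleep(delay * attempt)  # simple linear backoff
    raise RuntimeError(
        f"'{command}' failed after {retries} attempts") from last_error

# Simulate a link that drops the first two attempts, then succeeds.
attempts = []
def flaky_send(command):
    attempts.append(command)
    if len(attempts) < 3:
        raise CommandError("timeout")
    return "ack"

print(send_with_retry(flaky_send, "move_to_station_4"))  # prints "ack"
```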

As robotics finds new use cases, software must evolve so the hardware can satisfy the needs of more industries. For Industry 4.0, software developers are partnering with hardware and service providers to build systems that are easier to build, use, repurpose, and monitor.

Innovative combinations of software and robotics can result in new levels of autonomy and open new opportunities.

About the author

Sarrah Pitaliya is vice president of marketing at Radixweb. With a strong hold on market research and end-to-end digital branding strategies, she leads a team focused on corporate rebranding, user experience marketing, and demand generation.

Radixweb is a software development company with offices in the U.S. and India. This entry is reposted with permission.

The post Unleashing potential: The role of software development in advancing robotics appeared first on The Robot Report.

Investor Dean Drako acquires Cobalt Robotics
https://www.therobotreport.com/investor-dean-drako-acquires-cobalt-robotics/
Wed, 05 Jun 2024 17:06:35 +0000
Cobalt AI is set to expand the use of its human-verified AI technology in various enterprise security applications.

The post Investor Dean Drako acquires Cobalt Robotics appeared first on The Robot Report.


The Cobalt mobile robot features autonomous driving technology, allowing it to navigate through various terrains and obstacles with ease, ensuring constant vigilance without human operation. | Credit: Cobalt robotics

Cobalt Robotics has been acquired by investor Dean Drako, and the name of the firm has been changed to Cobalt AI. Financial terms of the acquisition were not disclosed. The name change was made to more accurately represent the future direction of the company and the products it offers.

Drako is the founder and CEO of Eagle Eye Networks, in addition to a number of other enterprises and side projects. Cobalt AI fits closest to the Eagle Eye Smart Video Surveillance portfolio of solutions.

There are no major changes to Cobalt’s leadership other than Drako serving as chairman. Ken Wolff, Cobalt’s current CEO, will continue leading the firm, which will operate independently with its current management team and entire staff.

Cobalt started with mobile robotics

Cobalt Robotics was founded in 2016 as a developer of autonomous mobile robots (AMRs) for security applications. The AMRs were designed to patrol the interior of a facility while actively surveilling activities and remotely monitoring the facilities as an extension of the building’s security.

To meet the growing needs of its corporate customers, Cobalt developed AI-based algorithms for alarm filtering, remote monitoring, sensing, and other autonomous data-gathering functions. In addition to the sensors onboard the Cobalt AMR, the Cobalt Monitoring Intelligence and Cobalt Command Center gather data from a broad range of cameras, access control systems, robots, and other edge devices.




“The company monitoring and command center technology is a catalyst for a new era of security,” said Drako. “They have created field-proven AI to make security and guarding tremendously more effective and efficient. Furthermore, Cobalt’s open platform strategy, which integrates with a plethora of video and access systems, is aligned with the open product strategy I believe in.”

Drako’s vision for sensor-monitoring AI

In a recent LinkedIn post, Drako explained why he made the deal.

“I did an extensive search, with a goal to acquire the company with the most powerful AI-based enterprise security automation technology in our physical security industry. Cobalt’s AI technologies, including their monitoring and command center solutions, are years ahead — they will be one of the catalysts for a new era of security.

“Importantly, Cobalt’s open platform strategy, which integrates with a wide range of video and access control systems, aligns with the open product strategy I strongly believe in.

“I am working closely with Cobalt AI’s leadership team, as well as infusing significant capital, to quickly scale their ‘human verified AI’ technology across enterprise security applications.”
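One minimal way to picture the "human verified AI" routing Drako describes is a confidence-based triage rule: high-confidence detections are acted on automatically, while ambiguous ones are queued for a human operator. The thresholds and alarm fields below are illustrative assumptions, not Cobalt's implementation.

```python
# Illustrative human-in-the-loop triage rule: act automatically on
# confident detections, queue ambiguous ones for a human operator.
# Thresholds and alarm fields are assumptions, not Cobalt's design.
def triage(alarms, auto_threshold=0.9, dismiss_threshold=0.1):
    auto_alert, human_review, auto_dismiss = [], [], []
    for alarm in alarms:
        score = alarm["score"]  # model confidence that the alarm is real
        if score >= auto_threshold:
            auto_alert.append(alarm["id"])
        elif score <= dismiss_threshold:
            auto_dismiss.append(alarm["id"])
        else:
            human_review.append(alarm["id"])  # ambiguous: send to a human
    return auto_alert, human_review, auto_dismiss

alarms = [
    {"id": "door-1", "score": 0.97},
    {"id": "cam-7", "score": 0.45},
    {"id": "cam-2", "score": 0.03},
]
print(triage(alarms))  # (['door-1'], ['cam-7'], ['cam-2'])
```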


Cobalt AI is marketing “Human verified AI” to promote human-in-the loop methods of leveraging AI and human-based perception to monitor and interpret security information. | Credit: Dean Drako

“We are thrilled that Dean Drako has acquired Cobalt and will serve as chairman. Dean has invested capital and strategic insights to grow other physical security companies to unicorns and technology leaders in their space,” said Wolff. “We share a mutual vision of the tremendous advantages of automation through AI with human verification. Drako’s acquisition validates our strategy to improve monitoring, response times and lower costs and also gives us the capital to deliver for our enterprise clients.”

The post Investor Dean Drako acquires Cobalt Robotics appeared first on The Robot Report.

RGo Robotics integrates NVIDIA Isaac technology into its perception platforms
https://www.therobotreport.com/rgo-robotics-integrates-nvidia-isaac-robotics-technology-into-its-platforms/
Wed, 05 Jun 2024 12:00:23 +0000
With RGo Robotics' perception system and NVIDIA Isaac Perceptor, the companies say customers can deploy mobile robots within a few months.

The post RGo Robotics integrates NVIDIA Isaac technology into its perception platforms appeared first on The Robot Report.


A mobile robot based on NVIDIA Isaac Perceptor libraries and RGo perception can quickly be set up in new facilities, operate reliably in any environment, and automate tasks both indoors and outdoors, say the companies. | Source: RGo Robotics

RGo Robotics Inc. this week announced it will integrate NVIDIA’s Isaac Robotics technology into its perception platforms. The company said the integration will “help advance AI-powered automation.”

RGo said its Perception Engine is an artificial intelligence and vision system for localization, obstacle detection, and scene understanding. By combining it with NVIDIA Isaac Perceptor acceleration libraries, the partners said they will enable customers to deploy mobile robots within a few months.

The integrated software stack is compatible with the new NVIDIA Nova Orin Developer kit and is intended to ease setup for robots in changing environments, both indoors and outdoors. 

“The RGo Perception Engine running on NVIDIA Jetson Orin modules is already deployed in dynamic and complex warehousing and manufacturing environments, helping enable intelligent automation in places not possible before,” stated Amir Bousani, co-founder and CEO of RGo Robotics.

“The expanded integration and availability of the RGo Perception Engine with NVIDIA Isaac Perceptor will help enable many more customers to deploy more intelligent mobile machines that can operate reliably in any environment,” he asserted. “Visual perception is an enabler for the generative AI revolution in robotics.”

RGo Robotics said it creates technology to enable machines to perceive their surroundings. The company, which has offices in Caesarea, Israel, and Cambridge, Mass., said its software helps robots learn on the go using computer vision, AI algorithms, and scalable sensor-fusion technology. 




RGO Robotics updates, integrates flagship software

Perception Engine is the flagship product of RGo Robotics. The company claimed that engineers can use its modular AI stack to enable autonomous driving of robots, remote operations, tracking of assets and inventory, geofencing, and dynamic shared mapping.
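To make the geofencing capability mentioned above concrete, here is a toy sketch that flags poses leaving an allowed zone. The coordinates and rectangular zone are invented for illustration; a production stack would work in the platform's own map frame.

```python
# Toy geofence check: flag when a robot pose leaves an allowed rectangular
# zone. Coordinates and the zone itself are invented for illustration.
from dataclasses import dataclass

@dataclass(frozen=True)
class Zone:
    x_min: float
    y_min: float
    x_max: float
    y_max: float

    def contains(self, x: float, y: float) -> bool:
        return (self.x_min <= x <= self.x_max
                and self.y_min <= y <= self.y_max)

warehouse_aisle = Zone(0.0, 0.0, 12.0, 3.0)
path = [(1.0, 1.5), (6.0, 2.0), (13.5, 2.0)]  # final pose exits the zone
breaches = [(x, y) for x, y in path if not warehouse_aisle.contains(x, y)]
print(breaches)  # [(13.5, 2.0)]
```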

The integration of Isaac Perceptor with Perception Engine will provide advanced vision capabilities to AMRs, said RGo and NVIDIA. They said this integrated software stack is compatible with NVIDIA’s recently released Nova Orin Developer Kit. 

“The era of robots powered by physical AI is here,” said Deepu Talla, vice president of robotics and edge AI at NVIDIA. “RGo’s solutions for AMRs, accelerated by NVIDIA Isaac, will let customers across industries deploy mobile robots that can better perceive, understand, and interact with the world around them.”

The companies said their combined offering will allow customers to use the unique data generated by the platform. This data includes positioning, dynamic mapping, and geometric and semantic understanding. This allows customers to enhance intelligent robots with generative AI-powered autonomy, natural language human-machine interaction, and advanced analysis and insights. 

RGo Robotics plans to work with NVIDIA Isaac software tools and packages, as well as NVIDIA Metropolis vision AI developer tools to build, deploy, and scale vision AI and generative AI. With these, it can identify obstacles, including very challenging ones like forks on the floor, as well as other features that are unique to specific customers.

In addition, RGo said it will be able to use Isaac to offer advanced capabilities such as the generation of facility maps using just a camera.

In his keynote at Computex this week, NVIDIA CEO Jensen Huang noted that an AMR using NVIDIA Isaac Perceptor and RGo Perception Engine was set up to run in an actual production environment within three days.

The post RGo Robotics integrates NVIDIA Isaac technology into its perception platforms appeared first on The Robot Report.

ABB releases OmniCore platform for control across its robotics line
https://www.therobotreport.com/abb-releases-omnicore-platform-control-across-robotics-line/
Tue, 04 Jun 2024 06:00:24 +0000
OmniCore now provides a unified control architecture for ABB's range of robotics hardware and software after a $170M investment.

The post ABB releases OmniCore platform for control across its robotics line appeared first on The Robot Report.


Operators use OmniCore V400XT to control a large robot with Robot Studio. Source: ABB Robotics

Thanks to advances in cloud computing, perception technology, and artificial intelligence, industrial and other robots are becoming smarter and more capable. ABB Robotics today launched its next-generation OmniCore platform, which can now control most of its automation line.

“For our customers, automation is a strategic requirement as they seek greater flexibility, simplicity, and efficiency in response to the global megatrends of labor shortages, uncertainty, and the need to operate more sustainably,” said Sami Atiya, president of ABB’s Robotics & Discrete Automation Business Area. “Through our development of advanced mechatronics, AI, and vision systems, our robots are more accessible, more capable, more flexible, and more mobile than ever.”

“But increasingly, they must also work seamlessly together, with us, and each other to take on more tasks in more places,” he added. “This is why we are launching OmniCore, a new milestone in our 50-year history in robotics; a unique, single control architecture – one platform, and one language that integrates our complete range of leading hardware and software.”

Three out of four European companies struggle to find workers for jobs such as welding and fulfillment, noted Atiya. He added that more than 2.1 million U.S. manufacturing jobs will be unfilled by 2030, and businesses need supply chain resilience. In response, Atiya said, OmniCore will provide greater simplicity and flexibility to ABB’s customers.

ABB Robotics, which has offices in Zurich; Vasteras, Sweden; and Auburn Hills, Mich., noted that OmniCore is the product of more than $170 million in investment. The unit of ABB Group called it “a step change to a modular and futureproof control architecture that will enable the full integration of AI, sensor, cloud, and edge computing systems to create the most advanced and autonomous robotic applications.”

While ABB has offered OmniCore since 2018, its plan was always to make it its unified control platform, explained Marc Segura, division president of ABB Robotics. “Now we are in our pivotal moment where we are launching it to cover almost our entire robotics portfolio,” he told The Robot Report.




OmniCore offers speed and accuracy

ABB Robotics said OmniCore delivers robot path accuracy at a level of less than 0.6 mm, and it can control the motion of multiple robots running at speeds of up to 1,600 mm per second (3.5 mph). This builds on ABB’s experience with automotive manufacturing. It also opens opportunities for precision automation in areas such as arc welding, assembly of mobile phone displays, gluing, and laser cutting.

“Our automotive customers are extremely competent and helped push the boundaries of what is possible,” Segura said. “OmniCore also complies with and exceeds the most stringent cybersecurity standard and is future-proof for AI and digitalization.”

He claimed that the updated platform enables its robots to operate up to 25% faster and to consume up to 20% less energy compared with its previous controller. It is open to peripherals including sensors, as well as external devices such as dispensers or welding tools, for numerous processes. It also supports up to 100 safety configurations.
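The speed figure quoted above is easy to sanity-check: converting 1,600 mm/s to miles per hour lands at roughly 3.6 mph, consistent with the article's 3.5 mph.

```python
# Sanity check of the quoted figures: convert 1,600 mm/s to miles per hour.
MM_PER_MILE = 1_609_344   # 1 mile = 1,609.344 m = 1,609,344 mm
SECONDS_PER_HOUR = 3_600

def mm_per_s_to_mph(mm_per_s):
    return mm_per_s * SECONDS_PER_HOUR / MM_PER_MILE

print(round(mm_per_s_to_mph(1_600), 2))  # 3.58, matching the article's ~3.5 mph
```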

Platform covers hardware, software ecosystem

OmniCore is built on a scalable, modular control architecture that offers a wide array of functions, making it suitable for new industries embracing automation, such as biotechnology and construction, said ABB. It also includes more than 1,000 hardware and software features to help customers design, operate, maintain, and optimize operations.

OmniCore is the top level of a software stack that includes the RobotWare operating system and Robot Studio for simulation and design, said Segura. He cited software features such as OptiFact for managing data, Absolute Accuracy, and PickMaster Twin, as well as hardware options spanning from external axis and vision systems to fieldbuses.

“The OmniCore difference is its ability to manage motion, sensors, and application equipment in a single holistic unified system,” he said. “Our new, next-generation platform is more than a controller. It is the backbone of value creation, which includes a complete, integrated software ecosystem.”

“For example, OmniCore enables automotive manufacturers to increase production speed, offering tremendous competitive advantage, increasing press-tending production from 12 to 15 strokes per minute to produce 900 parts per hour,” Segura said. “Some of these applications are now available even as pre-integrated configurations, enabling our systems integrators to reduce commissioning times even further.”
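Segura's press-tending arithmetic checks out: at the upper end of 15 strokes per minute, and assuming one part per stroke, an hour yields 900 parts.

```python
# Throughput arithmetic behind the press-tending example: strokes per
# minute times 60, assuming one part per stroke.
def parts_per_hour(strokes_per_minute, parts_per_stroke=1):
    return strokes_per_minute * 60 * parts_per_stroke

print(parts_per_hour(15))  # 900, the figure quoted above
print(parts_per_hour(12))  # 720, the low end of the quoted range
```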

“Software and AI are paramount for us at ABB,” Atiya said. “We have more than 100 projects ongoing to bring AI into our products and for our own productivity.”

He noted that AI enables inspection of welds 20 times faster than with humans, and up to 1,400 picks per hour with its robots. Atiya predicted that generative AI such as ChatGPT will broaden accessibility of robotics.


ABB says OmniCore offers seven benefits for robotics deployment and management. Source: ABB Robotics

ABB plans for compatibility across its robots

ABB said its history of robotics innovation began with “the world’s first microprocessor-controlled robot” in 1974. It launched the RobotStudio software in 1998 and acquired Sevensense in 2024 to bring industry-leading AI-based navigation technology to its autonomous mobile robots (AMRs) purchased with ASTI in 2021.

OmniCore replaces ABB Robotics’ IRC5 controller, which will be phased out in June 2026. The company plans to continue to support its customers with spare parts and services through the remaining lifetime of robots using it. Is new hardware needed to upgrade?

Existing users need only to make some minimal re-engineering for connectivity, wiring, and the customized user interface on the FlexPendant, replied Segura. No additional equipment or training is needed, but online and in-person training are available.

“We are still compliant with all the sensors used on IRC5 and have added more opportunities on the OmniCore platform,” Segura said. 

In addition to managing motion, sensors, and application equipment, OmniCore will be able to manage ABB’s collaborative robots, acknowledged Segura. “We also plan for all our AMRs and mobile manipulators to run on OmniCore in the near future,” he said.

After the “Fanta challenge” in 2009, which showed three robots working together, ABB demonstrated three robot arms moving around with champagne glasses to show off OmniCore’s precise motion control for production and safety purposes.

OmniCore is now available, and ABB is taking orders. The company is hosting a virtual conference for the new OmniCore platform at 10:00 CEST (4:00 a.m. EDT) on June 4, 2024. It will be available to those who register after the launch event.

The post ABB releases OmniCore platform for control across its robotics line appeared first on The Robot Report.

Adapta Robotics execs explain development strategies for testing and inventory robots
https://www.therobotreport.com/adapta-robotics-execs-explain-development-strategies-for-testing-and-inventory-robots/
Mon, 03 Jun 2024 17:46:59 +0000
Adapta Robotics grew out of a university competition team, and the Romanian startup identified electronics and retail as markets.

The post Adapta Robotics execs explain development strategies for testing and inventory robots appeared first on The Robot Report.

]]>
Adapta has developed robots for specific use cases. Source: Adapta Robotics

Starting a robotics company is always challenging, as inventors and entrepreneurs scramble to get access to capital and tap the local talent pool. However, identifying the right applications and markets is a good place to begin, according to the co-founders of Adapta Robotics & Engineering SRL. 

The Bucharest, Romania-based company said it specializes in addressing challenges in settings where traditional automation has fallen short and in use cases that have been overlooked. Adapta’s model includes a one-time robot purchase fee, plus annual license and maintenance fees.

In 2017, the company’s founders developed the first prototype of MATT, a delta robot for device testing, at rinf.tech with European Union support. In 2021, they introduced the Adapta name and developed the Effective Retail Intelligent Scanner, or ERIS, for scanning items on store shelves. In 2022, Adapta became an independent company.

Mihai Craciunescu, co-founder and CEO, and Diana Baicu, co-founder and lead robotics engineer of Adapta Robotics, spoke with The Robot Report about the company’s approach to designing and customizing robots for applications that were previously difficult to automate.

Adapta Robotics started with a competition team

What is the origin of Adapta Robotics?

Craciunescu: Cristian Dobre, Diana, and I started the company in 2015, but we had been a group at the University Politehnica of Bucharest, the largest technical university in Romania, since 2012. Our goal was to create robots for competitions, and we participated in contests from Europe to Turkey to China.

We won most of them, and we built line-following robots, sumo robots, and small-scale self-driving cars for Continental. We then said, “OK, what’s next?” We had to choose between pure research in academia and starting a robotics company, and we wanted the more applied side of robotics, based on our success in those challenges.

How did you determine what tasks or applications to try to automate? As we saw at R-24 and other events, there are already a lot of robots out there, from disinfection to materials handling.

Craciunescu: Our first idea, the MATT testing robot, came totally by chance. We were exposed to a U.S. manufacturer that did its software development and testing in Romania before pushing updates to its whole fleet of mobile phones. Phones in the Asian market were a testing ground for it, and all of its software testing processes were automated.

During one test, the screen went blank, but behind the scenes, the processor was still doing the right tasks. The company didn’t catch it, and millions of phones went blank. Knowing about this issue and having the robotics experience we did, we said, “Why don’t we build a robot to test these phones?”

Then, we slowly saw similar needs in other industries like automotive manufacturing. Infotainment systems need to be tested, and automakers want to make sure everything works as intended. Many other use cases derive from that.

We identified the problems and clients, and then we did a bit of market research. There were a couple of competitors, but they were very expensive and had limited capabilities.

How did you arrive at inventory with ERIS?

Craciunescu: We had a client with a couple of issues in its stores. One, some products had labels showing the wrong prices.

Another was that [the retailer] knew from its systems that it had a certain amount of products in stock, but it did not know if those products were on the shelf or in the warehouse somewhere. If they’re not on the shelf, that means lost opportunities.

The company was aware of the solutions on the market, including inventory robots from the States, but those were too expensive for the task it wanted to do. And, if you scan a shelf with an autonomous robot, and you have a report that 20 labels are wrong, you still need a human to manually replace the labels.

The company didn’t care about the autonomous part; it just wanted the problem to be fixed, so we have a scanner on a pushcart. We focused on reading the labels correctly, detecting the products that are out of stock or soon will be, and creating a report on just those three features.

We’re also exploring other functionalities like planogram compliance, making sure the products are placed where they’re supposed to be, while checking if you have multiple labels of the same product displayed on the shelf. But that was the main idea: Create a relatively cheap solution to scan the shelf and give you an audit.

Adapta founders, from left: Cristian Dobre, Mihai Craciunescu, and Diana Baicu. Source: Adapta Robotics

Know thy customers

Were you developing these systems for specific customers, or did you already have broader applications in mind? Did Adapta Robotics develop them with multiple customers at once, or did you start with one customer and then branch out?

Craciunescu: It was mixed. As an engineer, you can design all kinds of robots. Our approach is to try to solve problems that are specific to a certain industry.

Having the client tell us what they need is the most valuable feedback. We could think of different solutions in our lab, but when you are doing that in real life, you can quickly see what things to focus on.

It’s very important to have a client in the loop when you design something. Ideally, you should have more than one, but as we got started, we needed at least one that could use, let’s say, the first version of the robots. The client can see what doesn’t work, and then you can improve on that. After you have a prototype, then you can look around in the market.

Baicu: When we’re getting this information from clients and designing a product, we try to make something more general that can be easily customized afterwards. You don’t want it customized from the beginning because that limits other possibilities, even for development for that customer.

But then you need first customers that are patient, right? They have to be willing to work with you and understand that not everything’s going to work right away.

Craciunescu: That would be the ideal setup. Some clients were really pushy, and it was up to us to deliver. They can understand that something doesn’t work, but we had to fix it as soon as possible. We had quite a bit of pressure.

The MATT delta robot for product testing. Source: Adapta Robotics

Co-founders share lessons learned

The robotics development process is rarely a straight path. What are some of the lessons that you learned or surprises along the way?

Baicu: We knew this from building competition robots, but it became even clearer [with Adapta Robotics]. It’s about the design choices overall. … You need to have the right components and architectures from the beginning, at least with durability and scalability in mind. Otherwise, it will be very complicated to modify everything later rather than having thought about it from the start.

Of course, there’s a balance in how far you can go with these choices. Very expensive components or very complicated architectures take more time and money to implement.

When you’re sourcing components, whether it’s sensors or actuators, do you have preferred partners? How did you identify what would work best, given your and your customers’ priorities?

Craciunescu: It’s an iterative process. Let’s take ERIS as an example. Initially, we made an educated guess about the best-in-class cameras we needed.

When we actually connected them to the computer, we saw that they were on USB 3.0. We had the right cameras, but the communication protocol made the processor waste a lot of time converting the serial information to actual pictures and data metrics you could use.

We wanted that processor to run other things, so the next step was to find some cameras on another protocol. We then looked at different distributors and so on.

Another aspect we did not have experience with was, for example, cameras to measure the distances. Our approach was to buy depth cameras from all the major manufacturers and test them internally. We had a couple of criteria — we knew we wanted to look at shelves that were up to 1 m in depth and knew the distances from the robot to the shelves.

We also looked at the company maturity, or if they could provide the cameras 10 years from now. If we’re happy with all these smaller decisions, we’ll pull the trigger.

If you know from prior experience what’s the right solution for you, that’s fine, but most of the time, you need tests to validate the right path. This makes R&D quite expensive, but sometimes, you don’t have the luxury to buy all the solutions out there.

The ERIS inventory-scanning system works with human associates. Source: Adapta Robotics

When to focus on integration and simulation

On the software side, Adapta Robotics’ customers may use different systems. How much work does integration involve?

Baicu: It’s a significant part of what we do. From the beginning, we have focused on creating the software infrastructure as well as the intelligence, meaning the computer vision algorithms and the machine learning and AI. It’s a process that needs ongoing support.

First of all, there are the updates to improve or fix bugs, and at the same time, we maintain the algorithmic part with new data sets, examples, or retraining if needed.

How much do you rely on simulation for training and deployments?

Craciunescu: We try not to rely too much on simulation. We do some — for example, for mechanical stress testing. But we don’t go into the details like kinematics. We do what makes sense from an engineering point of view, as we want to build an actual product.

You can focus a lot on simulation, and that can be a trap because you can make the most beautiful simulations in the world and not have a product.

Baicu: At the same time, you have to transpose simulations into real-life situations. The simulation is an idealized environment, so you have to introduce noise or variations, but it will never match the real world. Sometimes, it can become very complicated if you put too much effort into the simulation side.




Adapta gives a glimpse of its roadmap

What is Adapta Robotics planning for this year?

Craciunescu: MATT is designed to be flexible and for multiple use cases. This year, we’re looking at specialized versions of MATT for different industries, like refurbishment, automotive, and medical.

It’s currently used in those industries with add-ons, but there are no models specifically designed for each industry, which could help the selling process.

Are you focusing more on productizing or re-engineering your technology? For instance, MATT’s software suite now works with a six-degree-of-freedom robot arm.

Baicu: A bit of both. We now have clients and know their needs. Maybe we just present it differently or add some features that make the product easier to set up and use. Sometimes, it’s about creating new add-ons or complementary solutions that can respond to the needs in that field of activity.

Craciunescu: With ERIS, we already have a client that’s mostly in the logistics space and requires things like barcode detection. It’s similar to retail but a different application. We’re exploring ways of reusing parts of the hardware and software that we’ve developed.

Are you in the midst of fundraising? Are you looking to expand to new markets internationally?

Craciunescu: Yes. We’re in the process of raising capital and are in a due diligence phase. We’re currently at 10 highly skilled professionals, but capital would allow us to be more aggressive in markets such as automotive.

Currently, 50% of our clients are in the U.S., and the rest are Western Europe. We have a couple of clients in India and Brazil as well.

At R-24, we discussed the Danish robotics scene. What is the industry like in Romania?

Craciunescu: Denmark is an outlier, and it’s doing very well. The European market in general doesn’t encourage R&D, which is very cash-intensive. If you look across Europe, robotics requires funding from the EU and from each individual state.

There are other robotics companies in Romania, and we have a lot of talent locally. We sometimes find out about one another at events outside of Romania.

Baicu: Romania is quite well developed on the IT and software development side. It’s fairly complicated to have a discussion about the needs of a company that does hardware.

With the rise of AI, we need to think more deeply about what we put our efforts into. We’re now seeing a bit of a shift and a better attitude toward manufacturing and hardware.

Craciunescu: The brain drain really affects us. As a young student willing to learn about robotics, I had no mentors. That’s a problem for the medical industry as well and society as a whole.

Right now, we’re trying at Adapta to provide a space for new students to come and learn from professionals. We had the option of going abroad but decided to build something locally. Being part of the EU, we can basically scale up anywhere we want.

The post Adapta Robotics execs explain development strategies for testing and inventory robots appeared first on The Robot Report.

]]>
https://www.therobotreport.com/adapta-robotics-execs-explain-development-strategies-for-testing-and-inventory-robots/feed/ 0