Robotics Australia Group is building a sustainable robotics industry

Robotics Australia Group has been working to elevate Australia's position in global robotics through collaboration and a national strategy.

The group’s board, as of November 2023, from left to right, back row: Dr. Sue Keay (chair), Brenton Cunningham, Christian Ruberg, Tim Bradley, Dr. Nathan Kirchner; front row: Dr. John Vial, Tamanna Monem, Kathie van Vugt, Nicci Rossouw, Angus Robinson. Source: Robotics Australia Group

The robotics industry in Australia stands at the precipice of a transformative era, driven by a shared vision of sustainability and innovation. At the forefront of this movement is the Robotics Australia Group, an organization committed to nurturing a comprehensive robotics ecosystem.

From companies developing cutting-edge robotic technologies to educational institutions cultivating future talent, the group supports all facets of this burgeoning industry. Its mission aligns with the broader national objectives, as recently underscored by the Australian government’s National Robotics Strategy.

National Robotics Strategy points the way to innovation

Ed Husic, a member of Parliament and Australia's minister for industry and science, recently announced the National Robotics Strategy. It marks a significant milestone for the Australian robotics sector, said the Robotics Australia Group.

“The strategy not only highlights the current achievements, but also lays a robust foundation for future developments,” stated Dr. Nathan Kirchner, founding director of the group. “It is a call to industry stakeholders to collaborate and drive forward this ambitious vision.” 

This strategy aims to accelerate the adoption of robotics and automation technologies across various industries, a move that is integral to the broader vision of a “Future Made in Australia.” The strategy is imbued with optimism, promising substantial advancements and positioning Australia as a leader in robotics innovation on the global stage.

Minister Husic’s declaration signaled the Australian government’s commitment to harnessing the potential of robotics to address the country’s unique challenges.

Some examples of the world-leading field robotics delivered by group members. Source: Robotics Australia Group

Minister recognizes Robotics Australia contributions

The group said its contributions have been instrumental in shaping the current landscape of the Australian robotics industry. During his announcement of the National Robotics Strategy, Husic acknowledged its sustained efforts, active participation in the development of the strategy, the contributions made through publishing Australian Robotic Roadmaps, and continued advocacy.

“We have deep pockets of robotics excellence in Australia, we will become greatly more competitive on the world stage by joining them together,” said Kirchner. “The National Robotics Strategy is a significant step towards that. I am very proud that the underpinning groundwork of the Robotics Australia Group has been recognized.”

The organization has worked to support various stakeholders within the ecosystem. By fostering collaborations, facilitating research and development, and promoting educational initiatives, it said it has created a fertile ground for the robotics industry to thrive. The group added that it is working to ensure that the benefits of robotics and automation are accessible to a broad range of industries and applications.

Robotics provides Australia a strategic advantage

“We have overcome the core challenges of a very large and sparsely populated country in order to deliver a number of notable outcomes,” said Kirchner. “Nevertheless, through doing so, we have developed a significant strategic advantage in field-hardened robotics.”

Australia’s geographical and demographic characteristics make it an ideal candidate for pioneering advanced robotics, asserted the group. The country’s vast landmass, coupled with a relatively small and dispersed population, creates a unique set of challenges that robotics can effectively address. Remote areas often require complex tasks to be completed, and robots can significantly enhance efficiency and safety in these environments.

Moreover, Australia boasts a remarkable depth of local talent and expertise in both hardware and software aspects of robotics, said the organization.

Industries such as mining, ports, transport and logistics, construction, agriculture, and defense have long benefited from Australia’s field-hardened robotics intellectual property, the group added. This robust foundation of expertise and innovation positions Australia to leverage robotics in solving critical problems and improving operational efficiencies across these sectors, it said.

One of the cutting-edge manufacturing installations developed by Applied Robotics, a group member. Source: Robotics Australia Group

Sector celebrates wins and looks ahead

“The announcement of the National Robotics Strategy is an exciting and commendable first step,” said the group. “However, it is essential to recognize that this is merely the beginning. The path to a fully realized, sustainable robotics industry in Australia requires continued effort and focus. While we celebrate this significant achievement, it is crucial to remain vigilant and committed to solidifying these initial steps to ensure long-term progress.”

The future of robotics in Australia holds immense potential, it noted. By using the momentum generated by the National Robotics Strategy, the nation’s industry can aspire to new heights on the global stage. This requires a concerted effort from all stakeholders to foster an environment conducive to innovation, collaboration, and international exchange, the group said.

“With the National Robotics Strategy as a guiding framework, Australia is poised to become a global leader in robotics and automation,” said Kirchner.

This vision can only be realized through collective effort and a strategic approach to international collaboration. By establishing a bi-directional conduit for deep commercial exchange in robotics and AI, Australia can position itself at the forefront of technological innovation.

The future success of the robotics industry hinges on the ability to integrate advanced technologies into practical applications that address real-world challenges. The group said that it and other industry stakeholders must continue to advocate for policies and initiatives that support research, development, and the commercialization of robotics technologies.

“The commitment of the Robotics Australia Group to building a sustainable robotics industry in Australia is both inspiring and crucial,” Kirchner said. “Their efforts, coupled with the strategic direction provided by the National Robotics Strategy, pave the way for a future where robotics and automation play a central role in addressing the nation’s unique challenges. By celebrating current achievements and maintaining a steadfast focus on future goals, Australia can achieve remarkable advancements in the robotics industry.”

In this journey, it is essential to remain proactive, collaborative, and visionary. With a collective effort, the vision of a “Future Made in Australia” powered by advanced robotics is not just a dream, but also an imminent reality. The group is currently spearheading the production of the third edition of the Robotics Roadmap for Australia, scheduled for release in 2025.

“Together, we can propel Australia to new heights of innovation and global leadership in the robotics sector,” said the group.

About the author

Dr. Nathan G.E. Kirchner is a serial startup founder and advisor, corporate ventures advisor, professor, and founding director of a peak body. He has been recognized as one of “Australia’s Most Innovative” by Engineers Australia and one of the “Top Ten Young Scientists” by Popular Science magazine.

With over 25 years in industry and academia, Kirchner has founded and led several robotics-AI startups, and he serves as a founding director of the Robotics Australia Group. Kirchner is also a venture partner at a leading hardware-first venture capital firm.

He has held prestigious positions, including head of robotics at a major construction company, as well as roles at Stanford University, the University of Technology Sydney, and Ohio State University.

Only 16% of manufacturers have real-time visibility into production, says Zebra

Manufacturers want more visibility into processes and to reskill staffers to work with automation, found Zebra and Azure Knowledge.

Zebra’s portfolio includes FlexShelf robots for parts fulfillment. Source: Zebra Technologies

Only 1 in 6 manufacturers has a clear understanding of its own processes, according to a new study from Zebra Technologies Corp. The report also found that 61% of manufacturers expect artificial intelligence to drive growth by 2029, up from 41% in 2024.

Zebra said the surge in AI interest, along with 92% of survey respondents prioritizing digital transformation, demonstrates manufacturers’ intent to improve data management and use new technologies that enhance visibility and quality throughout production.

“Manufacturers struggle with using their data effectively, so they recognize they must adopt AI and other digital technology solutions to create an agile, efficient manufacturing environment,” stated Enrique Herrera, industry principal for manufacturing at Zebra Technologies. “Zebra helps manufacturers work with technology in new ways to automate and augment workflows to achieve a well-connected plant floor where people and technology collaborate at scale.”

Zebra commissioned Azure Knowledge Corp. to conduct 1,200 online surveys among C-suite executives and IT and OT (information and operational technology) leaders within various manufacturing sectors. They included automotive, electronics, food and beverage, pharmaceuticals, and medical devices. Respondents were surveyed in Asia, Europe, Latin America, and North America.

The fully connected factory is elusive

Although manufacturers said digital transformation is a strategic priority, achieving a fully connected factory remains elusive, noted Zebra Technologies. The company asserted that visibility is key to optimizing efficiency, productivity, and quality on the plant floor.

However, only 16% of manufacturing leaders globally reported they have real-time, work-in-progress (WIP) monitoring across the entire manufacturing process, reported the 2024 Manufacturing Vision Study.

While nearly six in 10 manufacturing leaders said they expect to increase visibility across production and throughout the supply chain by 2029, one-third said getting IT and OT to agree on where to invest is a key barrier to digital transformation.

In addition, 86% of manufacturing leaders acknowledged that they are struggling to keep up with the pace of technological innovation and to securely integrate devices, sensors, and technologies throughout their facilities and supply chain. Zebra claimed that enterprises can use its systems for higher levels of security and manageability, as well as new analytics to elevate business performance.

Technology can augment workforce efficiency

Manufacturers are shifting their growth strategies by integrating and augmenting workers with AI and other technologies over the next five years, found Zebra’s study. Nearly three-quarters (73%) said they plan to reskill labor for data and technology usage, and seven in 10 said they expect to augment workers with mobility-enabling technology.

Manufacturers are implementing tools including tablets (51%), mobile computers (55%), and workforce management software (56%). In addition, 61% of manufacturing leaders said they plan to deploy wearable mobile computers.

Leaders across the C-suite, IT, and OT understand that labor initiatives must extend beyond improving worker efficiency and productivity with technology. Six in 10 leaders ranked ongoing development, retraining/upskilling, and career path development to attract future talent as high priorities for their organizations.

Automation advances to optimize quality

The quest for quality has intensified as manufacturers across segments must do more with fewer resources. According to Zebra and Azure’s survey, global manufacturers said today’s most significant quality management issues are real-time visibility (33%), keeping up with new standards and regulations (29%), integrating data (27%), and maintaining traceability (27%).

Technology implementation plans are addressing these challenges. Over the next five years, many executives said they plan to implement robotics (65%), machine vision (66%), radio frequency identification (RFID; 66%), and fixed industrial scanners (57%).

Most survey respondents agreed that these automation decisions are driven by factors including the need to provide the workforce with high-value tasks (70%), meet service-level agreements (SLAs; 69%), and add more flexibility to their plant floors (64%).

Zebra Technologies shares regional findings

  • Asia-Pacific (APAC): While only 30% of manufacturing leaders said they use machine vision across the plant floor in APAC, 67% are implementing or planning to deploy this technology within the next five years.
  • Europe, the Middle East, and Africa (EMEA): In Europe, reskilling labor to enhance data and technology usage skills was the top-ranked workforce strategy for manufacturing leaders to drive growth today (46%) and in five years (71%).
  • Latin America (LATAM): While only 24% of manufacturing leaders rely on track and trace technology in LATAM, 74% are implementing or plan to implement the technology in the next five years.
  • North America: In this region, 68% of manufacturing leaders ranked deploying workforce development programs as their most important labor initiative.

The Manufacturing Vision Study provided insights around digitalization and the connected factory. Source: Zebra Technologies

Zebra to discuss digital transformation

While digital transformation is a priority for manufacturers, achieving it is fraught with obstacles, including the cost and availability of labor, scaling technology solutions, and the convergence of IT and OT, according to Zebra Technologies. The Lincolnshire, Ill.-based company said visibility is the first step to such transformation.

Emerging technologies such as robotics and AI enable manufacturers to use data to identify, react to, and prioritize problems and projects so they can deliver incremental efficiencies that yield the greatest benefits, Zebra said. The company said it provides systems to enable businesses to intelligently connect data, assets, and people.

Zebra added that its portfolio, which includes software, mobile robots, machine vision, automation, and digital decisioning, can help boost visibility, optimize quality, and augment workforces. It has more than 50 years of experience in scanning, track-and-trace, and mobile computing systems.

The company has more than 10,000 partners across over 100 countries, as well as 80% of the Fortune 500 as customers. Zebra is hosting a webinar today about how to overcome top challenges to digitalization and automation.

At CVPR, NVIDIA offers Omniverse microservices, shows advances in visual generative AI

Omniverse Cloud Sensor RTX can generate synthetic data for robotics, says NVIDIA, which is presenting over 50 research papers at CVPR.

As shown at CVPR, Omniverse Cloud Sensor RTX microservices generate high-fidelity sensor simulation from an autonomous vehicle (left) and an autonomous mobile robot (right). Sources: NVIDIA, Fraunhofer IML (right)

NVIDIA Corp. today announced NVIDIA Omniverse Cloud Sensor RTX, a set of microservices that enable physically accurate sensor simulation to accelerate the development of all kinds of autonomous machines.

NVIDIA researchers are also presenting 50 research projects around visual generative AI at the Computer Vision and Pattern Recognition, or CVPR, conference this week in Seattle. They include new techniques to create and interpret images, videos, and 3D environments. In addition, the company said it has created its largest indoor synthetic dataset with Omniverse for CVPR’s AI City Challenge.

Sensors provide industrial manipulators, mobile robots, autonomous vehicles, humanoids, and smart spaces with the data they need to comprehend the physical world and make informed decisions.

NVIDIA said developers can use Omniverse Cloud Sensor RTX to test sensor perception and associated AI software in physically accurate, realistic virtual environments before real-world deployment. This can enhance safety while saving time and costs, it said.

“Developing safe and reliable autonomous machines powered by generative physical AI requires training and testing in physically based virtual worlds,” stated Rev Lebaredian, vice president of Omniverse and simulation technology at NVIDIA. “Omniverse Cloud Sensor RTX microservices will enable developers to easily build large-scale digital twins of factories, cities and even Earth — helping accelerate the next wave of AI.”

Omniverse Cloud Sensor RTX supports simulation at scale

Built on the OpenUSD framework and powered by NVIDIA RTX ray-tracing and neural-rendering technologies, Omniverse Cloud Sensor RTX combines real-world data from videos, cameras, radar, and lidar with synthetic data.

Omniverse Cloud Sensor RTX includes software application programming interfaces (APIs) to accelerate the development of autonomous machines for any industry, NVIDIA said.

Even for scenarios with limited real-world data, the microservices can simulate a broad range of activities, claimed the company. It cited examples such as whether a robotic arm is operating correctly, an airport luggage carousel is functional, a tree branch is blocking a roadway, a factory conveyor belt is in motion, or a robot or person is nearby.

Microservice to be available for AV development 

CARLA, Foretellix, and MathWorks are among the first software developers with access to Omniverse Cloud Sensor RTX for autonomous vehicles (AVs). The microservices will also enable sensor makers to validate and integrate digital twins of their systems in virtual environments, reducing the time needed for physical prototyping, said NVIDIA.

Omniverse Cloud Sensor RTX will be generally available later this year. NVIDIA noted that its announcement coincided with its first-place win at the Autonomous Grand Challenge for End-to-End Driving at Scale at CVPR.

The NVIDIA researchers’ winning workflow can be replicated in high-fidelity simulated environments with Omniverse Cloud Sensor RTX. Developers can use it to test self-driving scenarios in physically accurate environments before deploying AVs in the real world, said the company.

Two of NVIDIA’s papers — one on the training dynamics of diffusion models and another on high-definition maps for autonomous vehicles — are finalists for the Best Paper Awards at CVPR.

The company also said its win for the End-to-End Driving at Scale track demonstrates its use of generative AI for comprehensive self-driving models. The winning submission outperformed more than 450 entries worldwide and received CVPR’s Innovation Award.

Collectively, the work introduces artificial intelligence models that could accelerate the training of robots for manufacturing, enable artists to more quickly realize their visions, and help healthcare workers process radiology reports.

“Artificial intelligence — and generative AI in particular — represents a pivotal technological advancement,” said Jan Kautz, vice president of learning and perception research at NVIDIA. “At CVPR, NVIDIA Research is sharing how we’re pushing the boundaries of what’s possible — from powerful image-generation models that could supercharge professional creators to autonomous driving software that could help enable next-generation self-driving cars.”

Foundation model eases object pose estimation

NVIDIA researchers at CVPR are also presenting FoundationPose, a foundation model for object pose estimation and tracking that can be instantly applied to new objects during inference, without the need for fine tuning. The model uses either a small set of reference images or a 3D representation of an object to understand its shape. It set a new record on a benchmark for object pose estimation.

FoundationPose can then identify and track how that object moves and rotates in 3D across a video, even in poor lighting conditions or complex scenes with visual obstructions, explained NVIDIA.

Industrial robots could use FoundationPose to identify and track the objects they interact with. Augmented reality (AR) applications could also use it with AI to overlay visuals on a live scene.

NeRFDeformer transforms data from a single image

NVIDIA’s research includes a text-to-image model that can be customized to depict a specific object or character, a new model for object-pose estimation, a technique to edit neural radiance fields (NeRFs), and a visual language model that can understand memes. Additional papers introduce domain-specific innovations for industries including automotive, healthcare, and robotics.

A NeRF is an AI model that can render a 3D scene based on a series of 2D images taken from different positions in the environment. In robotics, NeRFs can generate immersive 3D renders of complex real-world scenes, such as a cluttered room or a construction site.

However, to make any changes, developers would need to manually define how the scene has transformed — or remake the NeRF entirely.

Researchers from the University of Illinois Urbana-Champaign and NVIDIA have simplified the process with NeRFDeformer. The method can transform an existing NeRF using a single RGB-D image, which is a combination of a normal photo and a depth map that captures how far each object in a scene is from the camera.
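
To make the NeRF idea more concrete, here is a minimal volume-rendering sketch of the kind a NeRF performs for each pixel: sample points along a camera ray, query a trained network for color and density, and composite the results. The `nerf_mlp` argument is a hypothetical stand-in for a trained network; this is an illustration of the general technique, not NVIDIA's or NeRFDeformer's code.

```python
import numpy as np

def render_ray(nerf_mlp, origin, direction, near=0.1, far=4.0, n_samples=64):
    """Minimal NeRF-style volume rendering for a single camera ray."""
    t = np.linspace(near, far, n_samples)             # depths along the ray
    points = origin + t[:, None] * direction          # 3D sample positions
    rgb, sigma = nerf_mlp(points, direction)          # color + density per sample

    delta = np.diff(t, append=t[-1] + (t[1] - t[0]))  # spacing between samples
    alpha = 1.0 - np.exp(-sigma * delta)              # opacity of each segment
    # Transmittance: fraction of light surviving to reach each sample
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))
    weights = alpha * trans
    return (weights[:, None] * rgb).sum(axis=0)       # composited pixel color

# Dummy stand-in for a trained NeRF network: uniform gray, constant density
dummy = lambda pts, d: (np.full((len(pts), 3), 0.5), np.full(len(pts), 2.0))
pixel = render_ray(dummy, np.zeros(3), np.array([0.0, 0.0, 1.0]))
```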

Researchers have simplified the process of generating a 3D scene from 2D images using NeRFs. Source: NVIDIA

JeDi model shows how to simplify image creation at CVPR

Creators typically use diffusion models to generate specific images based on text prompts. Prior research focused on the user training a model on a custom dataset, but the fine-tuning process can be time-consuming and inaccessible to general users, said NVIDIA.

JeDi, a paper by researchers from Johns Hopkins University, Toyota Technological Institute at Chicago, and NVIDIA, proposes a new technique that allows users to personalize the output of a diffusion model within a couple of seconds using reference images. The team found that the model outperforms existing methods.

NVIDIA added that JeDi can be combined with retrieval-augmented generation, or RAG, to generate visuals specific to a database, such as a brand’s product catalog.

JeDi is a new technique that allows users to easily personalize the output of a diffusion model within a couple of seconds using reference images, like an astronaut cat that can be placed in different environments. Source: NVIDIA

Visual language model helps AI get the picture

NVIDIA said it has collaborated with the Massachusetts Institute of Technology (MIT) to advance the state of the art for vision language models, which are generative AI models that can process videos, images, and text. The partners developed VILA, a family of open-source visual language models that they said outperforms prior neural networks on benchmarks that test how well AI models answer questions about images.

VILA’s pretraining process provided enhanced world knowledge, stronger in-context learning, and the ability to reason across multiple images, claimed the MIT and NVIDIA team.

The VILA model family can be optimized for inference using the NVIDIA TensorRT-LLM open-source library and can be deployed on NVIDIA GPUs in data centers, workstations, and edge devices.

VILA can understand memes and reason based on multiple images or video frames. Source: NVIDIA

Generative AI drives AV, smart city research at CVPR

NVIDIA Research has hundreds of scientists and engineers worldwide, with teams focused on topics including AI, computer graphics, computer vision, self-driving cars, and robotics. A dozen of the NVIDIA-authored CVPR papers focus on autonomous vehicle research.

“Producing and Leveraging Online Map Uncertainty in Trajectory Prediction,” a paper authored by researchers from the University of Toronto and NVIDIA, has been selected as one of 24 finalists for CVPR’s best paper award.

In addition, Sanja Fidler, vice president of AI research at NVIDIA, will present on vision language models at the Workshop on Autonomous Driving today.

NVIDIA has contributed to the CVPR AI City Challenge for the eighth consecutive year to help advance research and development for smart cities and industrial automation. The challenge’s datasets were generated using NVIDIA Omniverse, a platform of APIs, software development kits (SDKs), and services for building applications and workflows based on Universal Scene Description (OpenUSD).

AI City Challenge synthetic datasets span multiple environments generated by NVIDIA Omniverse, allowing hundreds of teams to test AI models in physical settings such as retail and warehouse environments to enhance operational efficiency. Source: NVIDIA

About the author

Isha Salian writes about deep learning, science and healthcare, among other topics, as part of NVIDIA’s corporate communications team. She first joined the company as an intern in summer 2015. Isha has a journalism M.A., as well as undergraduate degrees in communication and English, from Stanford.

Leading tractor manufacturers hosting annual hacking week

CyberTractor Challenge encourages college students to hack the cloud-based solutions and physical hardware from AGCO, CNHI, John Deere and more.

The 2023 CyberTractor Challenge included a large number of participants. | Credit: CyberTractor Challenge

This week, top tractor manufacturers in the United States, including John Deere, CNHI, and AGCO, will hold the annual CyberTractor Challenge. This event encourages college students to try to breach the security of both the firms’ cloud-based solutions and physical hardware such as tractors, smart tools, and different IoT devices.

The CyberTractor Challenge is a five-day event aimed at students passionate about cybersecurity. It provides a platform for practical experience and professional guidance. During the event, industry experts emphasize the importance of diversity, hands-on experience, and professional mentorship. They also highlight the convergence of technology and agriculture, underlining the need for professionals with expertise in both fields. This event addresses the significant shortage of cyber talent and showcases the potential synergy between cybersecurity and agriculture, given the complexity of modern agricultural equipment.

Earlier this year, John Deere announced a new partnership with SpaceX and Starlink to bring high-speed internet to rural areas around the world to help connect all of the various smart devices on the modern farm to the cloud.

CyberTractor Challenge expands beyond John Deere

John Deere started the CyberTractor Challenge in 2022 as a sister event to the more well-known CyberTruck and CyberAuto challenges. College and university students gather on a real farm in Iowa to work with real equipment and real cybersecurity and engineering professionals. As the idea for CyberTractor grew, the challenge’s goals and scope changed from just focusing on the famous green and yellow tools to including peers from the industry. CyberTractor Challenge is now a 501(c)(3) non-profit organization and aims to enhance the overall security of the agtech industry.

“Industry experts, professors, and tractor company employees will be guiding them every step of the way,” said Ethan Luebbering, director of recruiting for the CyberTractor Challenge. “Our plan is to prepare them with all the skills and tools they need to be effective during the event and all of the experiences required to start a career in Cyber Security.”

The primary goals of the CyberTractor Challenge are:

  • Educating students about cybersecurity in the agriculture industry through hands-on learning and expert training.
  • Attracting and developing the next generation of cybersecurity talent for the agriculture industry.
  • Fostering collaboration between universities and agriculture companies on cybersecurity issues.
  • Identifying potential vulnerabilities in agricultural equipment and systems through a hackathon-style event.

Throughout the event, students gain knowledge in embedded software engineering and protocols like CANbus used in modern agricultural equipment. Working with industry professionals from the tractor companies, students learn cybersecurity topics such as penetration testing and red teaming techniques for finding vulnerabilities.
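
To give a flavor of the protocol work involved, the sketch below decodes a classic CAN frame in the 16-byte Linux SocketCAN wire layout (4-byte identifier, 1-byte data length code, 3 padding bytes, 8 data bytes). It is a generic illustration, not material from the event, and the example frame is made up.

```python
import struct

def parse_socketcan_frame(frame: bytes) -> dict:
    """Decode a classic (non-FD) CAN frame in Linux SocketCAN layout."""
    can_id, dlc, data = struct.unpack("<IB3x8s", frame)
    return {
        "id": can_id & 0x1FFFFFFF,               # identifier bits
        "extended": bool(can_id & 0x80000000),   # 29-bit extended-frame flag
        "data": data[:dlc],                      # only the first dlc bytes are valid
    }

# Made-up frame: ID 0x123 carrying two data bytes
raw = struct.pack("<IB3x8s", 0x123, 2, bytes([0xDE, 0xAD, 0, 0, 0, 0, 0, 0]))
print(parse_socketcan_frame(raw))  # {'id': 291, 'extended': False, 'data': b'\xde\xad'}
```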

Many modern tractors and smart implements could be considered robots, sitting at the intersection of technology and agriculture. This weeklong hackathon provides the students with real-life, hands-on skills for identifying potential bugs or vulnerabilities. The companies benefit by identifying any vulnerabilities in a controlled environment, and farmers benefit because the agtech solutions are hardened against nefarious vulnerabilities.




The event attracts students from universities across the country with interests in fields such as electrical engineering, computer science, and cybersecurity. The organizers also recruit professors from partner universities who teach cybersecurity courses and help educate the students. Participating institutions include Iowa State, Colorado State, and Dakota State University. Employees from the sponsoring agriculture companies conduct educational sessions, interact with the students, and evaluate any findings.

A weeklong hackathon

The week-long event kicks off with two days of educational sessions where industry experts and hackers teach students about embedded systems, protocols, and penetration testing techniques. Over the next two days, students apply their knowledge to develop hypotheses and test for potential vulnerabilities in the equipment in a hackathon-style setting. On the final day, students present any bugs or vulnerabilities they have discovered to the sponsoring companies.

Key outcomes of the CyberTractor Challenge include:

  • Identifying potential vulnerabilities in agricultural equipment cybersecurity.
  • Attracting and developing top cybersecurity talent for the agriculture industry by exposing students to career opportunities.
  • Fostering collaboration and information sharing between universities and agriculture companies on cybersecurity best practices.
  • Advancing cybersecurity standards and regulations for the agriculture industry through discussions among participating organizations.
  • Building awareness of the importance of cybersecurity in agriculture and attracting diverse talent beyond traditional IT fields.
  • Providing hands-on, experiential learning for students that complements their academic studies.

Catch up with the latest in agricultural autonomy on The Robot Report Podcast. Chris Padwick discusses John Deere’s use of machine vision and AI in episode 149, and Marc Kermisch from CNHI talks about digitization and autonomy in agriculture on episode 138.

ETRI develops omnidirectional tactile sensors for robot hands

ETRI is introducing a tactile sensor that fits into a robotic finger with stiffness and a shape similar to a human finger.

ETRI’s robotic hand with omnidirectional tactile sensors. | Source: ETRI

Researchers from the Electronics and Telecommunications Research Institute (ETRI) are developing tactile sensors that detect pressure regardless of the direction it’s applied.

ETRI is introducing a tactile sensor that fits into a robotic finger with stiffness and a shape similar to a human finger. The robotic finger can flexibly handle everything from hard objects to deformable soft objects.

“This sensor technology advances the interaction between robots and humans by a significant step and lays the groundwork for robots to be more deeply integrated into our society and industries,” said Kim Hye-jin, a principal researcher at ETRI’s Intelligent Components and Sensors Research Section.

ETRI entered into a mutual cooperation agreement with Wonik Robotics to jointly develop the technology. Wonik will bring its expertise in robotic hands, including its experience developing the “Allegro Hand,” to the project. It previously supplied robotic hands to companies like Meta Platforms, Google, NVIDIA, Microsoft, and Boston Dynamics. The companies jointly exhibited related achievements at the Smart Factory & Automotive Industry Exhibition, held at COEX in Seoul.

Overcoming the technical limitations of pressure sensors

ETRI’s robotic hand with tactile sensors also includes LED lights that change colors according to pressure changes. | Source: ETRI

ETRI’s research team says its technology can overcome the technical limitations of pressure sensors applied to existing robotic fingers. Previously, these sensors could show distorted signals depending on the direction in which the object was grasped. The team said it’s also highly rated for its performance and reliability. 

The sensor can detect pressure from various directions, even in a three-dimensional finger form, while also providing the flexibility to handle objects as naturally as a human hand, according to the team. These abilities make up the core of the technology. 

ETRI was able to advance this robotic finger technology by integrating omnidirectional pressure sensing with flexible air chamber tactile sensor technology, high-resolution signal processing circuit technology, and intelligent algorithm technology capable of real-time determination of an object’s stiffness. 

Additionally, the team enhanced the sensor’s precision in pressure detection by including LED lights that change colors according to pressure changes. This provides intuitive feedback to users. The team took this a step further and also integrated vibration detection and wireless communication capabilities to further strengthen communication between humans and robots. 
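
As a rough illustration of that feedback loop, the sketch below maps a pressure reading onto a green-to-red LED gradient. ETRI has not published its mapping, so the function and the 0-100 kPa range here are assumptions for illustration only.

```python
def pressure_to_rgb(pressure_kpa: float, max_kpa: float = 100.0) -> tuple:
    """Map a pressure reading to an (R, G, B) color from green to red."""
    level = min(max(pressure_kpa / max_kpa, 0.0), 1.0)      # clamp to [0, 1]
    return (int(255 * level), int(255 * (1.0 - level)), 0)  # harder press -> redder

print(pressure_to_rgb(25.0))   # light touch: (63, 191, 0), mostly green
print(pressure_to_rgb(90.0))   # firm grip:   (229, 25, 0), mostly red
```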

Unlike traditional sensors, which have sensors directly placed in the area where pressure is applied, these tactile sensors are not directly exposed to the area where pressure is applied. This allows for stable operation over long periods, even with continuous contact. The team says this improves the scalability of applications for robotic hands. 

Looking ahead to the future

ETRI says the development of intelligent robotic hands that can adjust their grip strength according to the stiffness of objects will bring about innovation in ultra-precise object recognition. The team expects commercialization to begin in the latter half of 2024.

Sensitive and robust robotic fingers could help robots perform more complex and delicate tasks in various fields, including in the manufacturing and service sectors. ETRI expects that, through tactile sensor technology, robots will be able to manipulate a wide range of objects more precisely and significantly improve interaction with humans.

In the future, the research team plans to develop an entire robotic hand with these tactile sensors. Additionally, they aim to extend their development to a super-sensory hand that surpasses human sensory capabilities, with pressure, temperature, humidity, light, and ultrasound sensors.

Through the team’s collaboration with Wonik, it has developed a robotic hand capable of recognizing objects through tactile sensors and flexibly controlling force. The research team plans to continue various studies to enable robots to handle objects and perceive the world as human hands do through sensors.

Interact Analysis predicts strong global cobot market growth

Interact Analysis said the global cobot market exceeded $1 billion in 2023, with strong growth forecast for 2024-28.

Global cobot market revenues are set to increase at >20% a year from 2024 to 2028. | Credit: Interact Analysis

The global collaborative robot (cobot) market topped $1 billion in revenues during 2023, despite overall demand recovering more slowly than expected post-pandemic, according to new data from Interact Analysis. Looking to the future, the market intelligence specialist predicts the global market for cobots will see a 22% increase in shipments during 2024 and anticipates similar levels of growth (>20%) each year through 2028.

  • Global cobot market revenue exceeded $1 billion in 2023, despite lower-than-expected growth
  • There has been a clear shift from individual to holistic solutions
  • Market demand for cobots recovered more slowly than anticipated, but revenues are expected to grow at >20% between 2024-28

Interact Analysis also recently documented an increase in demand for integrated robot control. As recently as 2021, global tech market advisory firm ABI Research also predicted that the cobot market would grow substantially over the coming decade. According to ABI, the market had a global valuation of $475 million in 2020 (slightly lower than Interact’s numbers), which would expand to $600 million in 2021 and $8 billion in 2030, with a projected CAGR of 32.5%.
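
ABI's projection can be sanity-checked with the standard compound-annual-growth-rate formula, CAGR = (end / start)^(1/years) − 1. The short snippet below uses ABI's endpoints of $475 million in 2020 and $8 billion in 2030, yielding roughly 32.6%, in line with the cited 32.5%.

```python
def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate between two values."""
    return (end / start) ** (1.0 / years) - 1.0

# ABI Research's endpoints: $475M in 2020 growing to $8B in 2030
print(f"{cagr(475e6, 8e9, 10):.1%}")  # ~32.6%, consistent with the cited ~32.5%
```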

Interact Analysis believes that annual revenue growth for cobots was around 11.9% in 2023, despite a challenging year for manufacturing, tough economic conditions, and supply chain issues. Demand for cobots in the automotive and new energy industries remained high last year, but demand for cobots in electronics and semiconductors fell significantly, leading to a small V-shaped trajectory between 2022-24. Orders from the semiconductor and logistics industries are expected to bounce back in 2024, but high interest rates could weaken overall order intake this year.

Universal Robots (UR) remains the cobot market leader; it generated $304 million in revenue in 2023.




“The global cobot market is becoming more refined, as end-users seek out holistic solutions rather than purchasing large pieces of equipment,” said Interact Analysis research manager Maya Xiao. “Over the coming year, we expect to see major cobot vendors target large customers, which could impact capacity and resource allocation.

“Moving forward, China will dominate the global cobots market in the medium term, but it is also the region where average revenue per unit (ARPU) is expected to fall most sharply, as competition in the market increases. The cobot market growth rate in regions other than China will approach that of China after 2025 and the global average market price for cobots is expected to increase slightly between 2023 and 2028 as demand grows for collaborative robots capable of handling larger payloads.”

This report answers these and other key questions facing the industry today. A premium version of the report is available with quarterly market movement updates, a mid-year market forecast, and granular data about cobots by country, by industry, and by payload.

2024 edition of U.S. robotics roadmap points to need for more federal coordination

The 2024 edition of "A Roadmap for U.S. Robotics" calls for a more strategic approach and retraining for economic growth.

Cover of the 2024 U.S. robotics roadmap. Source: UC San Diego

Unlike China, Germany, or Japan, the U.S. doesn’t have a centralized industrial policy. The U.S. has a culture that uniquely encourages innovation, but a lack of strong coordination among academia, government, and industry affects the development and deployment of technologies such as robotics, according to the 2024 edition of “A Roadmap for U.S. Robotics: Robotics for a Better Tomorrow.”

The quadrennial set of recommendations is produced and sponsored by institutions led by the University of California, San Diego. The authors sent the latest edition to presidential campaigns, the AI Caucus in Congress, and the investment community, noted Henrik I. Christensen, the main editor of the roadmap.

Henrik Christensen, UC San Diego

Christensen is the Qualcomm Chancellor’s Chair of Robot Systems and a distinguished professor of computer science at the Department of Computer Science and Engineering at UC San Diego. He is also the director of the Contextual Robotics Institute, the Cognitive Robotics Laboratory, and the Autonomous Vehicle Laboratory.

In addition, Christensen is a serial entrepreneur, co-founding companies including Robust.AI. He is also an investor through firms such as ROBO Global, Calibrate Ventures, Interwoven, and Spring Mountain Capital.

The Robot Report spoke with Christensen about the latest “Roadmap for U.S. Robotics.”

Robotics roadmap gives a mixed review

How does this year’s roadmap compare with its predecessors?

Christensen: We’ve been doing this since 2009 and have aligned it to the federal elections. We did do a midterm report in 2022, and the current report card is mixed.

For instance, we’ve seen investments in laboratory automation and anticipated the workforce shortage because of demographics and changes in immigration policies. The COVID-19 pandemic also accelerated interest in e-commerce, supply chain automation, and eldercare.

The government support has been mixed. The National Robotics Initiative has sunset, and there have been no meetings of the Congressional Caucus on Robotics since 2019. Recently, we did have a robot showcase with the Congressional Caucus for AI.

With all of the recent attention on artificial intelligence, how does that help or hurt robotics?

Christensen: Some of the staffers of the AI caucus used to go to robotics caucus meetings. The AI initiative created about six years ago rolled up robotics, but in the end, it provided no new funding for robotics.

Robotics, in many respects, is where AI meets reality. With the workforce shortage, there is a dire need for new robot technology to ensure growth of the U.S. economy.

We’ve heard that reshoring production is part of the answer, but it’s not clear that there must be a corresponding investment in R&D to make it happen. Without a National Robotics Initiative, there’s also no interagency coordination.

CMU co-hosted a Senate Robotics Showcase and Demo Day. Graduate student Richard Desatnik demonstrated a glove that remotely operated a soft robot on table. Source: Carnegie Mellon University

Christensen calls for more federal coordination

Between corporations, academic departments, and agencies such as DARPA and NASA, isn’t there already investment in robotics research and development?

Christensen: Multiple agencies sponsor robotics, in particular in the defense sector. The foundational research is mainly sponsored by the National Science Foundation, and the programs come across as uncoordinated.

The roadmap isn’t asking for more money for robotics R&D; it’s recommending that available programs be better coordinated and directed toward more widespread industrial and commercial use.

While venture capital has been harder to get in the past few years, how would you describe the U.S. startup climate?

Christensen: We’re seeing a lot of excitement in robotics, with companies like Figure AI. While resources have gone into fundamental research, we need a full applications pipeline and grounded use cases.

Right now, most VCs are conservative, and interest rates have made it harder to get money. Last year, U.S. industrial automation was down 30%, which has been a challenge for robotics.

Why do you think that happened?

Christensen: It was a combination of factors, including COVID. Companies over-invested based on assumptions but then couldn’t invest in infrastructure. Investment in facilities is limited until we get better interest rates.

The latest robotics roadmap said both automation and employment lead to economic growth, as shown by data from the International Federation of Robotics and the Bureau of Labor Statistics. Source: “A Roadmap for U.S. Robotics”

The U.S. can regain robotics leadership

When do you think that might turn around? What needs to happen?

Christensen: In the second half of the year, robotics could pick up quickly. More things, like semiconductors, are moving back to the U.S., and manufacturing and warehousing are short by millions of workers.

Reshoring hasn’t happened at scale, and there’s not enough R&D, but the U.S. also needs to retrain its workforce. There are a few trade schools with a robotics focus, and we need the federal government to assist in emphasizing the need for retraining to allow more reshoring.

What other enabling factors are needed in Washington?

Christensen: The OSTP [White House Office of Science and Technology Policy] had limited staffing in the previous administration, and we can’t afford another two years of that. We need to hold Washington accountable, and the U.S. industrial sector needs agility.

The robotics community has a big challenge to educate people about the state of the industry. Americans think we’re better than we actually are. We’re not in the top five automotive producers; it’s actually China, Japan, Germany, South Korea, and India. No major industrial robotics supplier is based in the U.S.

When we started these roadmaps, the U.S. was in the top four in industrial robot consumption and a leader in service robotics. Now, it’s no longer in the top 10.

The future for iRobot, the only U.S. household name in robotics, isn’t pretty after its deal fell through with Amazon, at least partly because of antitrust scrutiny. We need to assist our companies to remain competitive.

How might the U.S. get its act together with regard to robotics policy? Australia just launched its own National Robotics Strategy.

Christensen: We shouldn’t let robotics go. I left Denmark about 30 years ago, and the robotics cluster there started after Maersk moved its shipyard to South Korea. The city of Odense and local universities, with national government support, all invested in an ecosystem that led to the formation of Universal Robots and Mobile Industrial Robots. Today, Odense is the capital of robotics in Europe.

Recently, the Port of Odense launched a robotics center for large structures. It continues to grow its ecosystem. It shows why it’s worth it for nations to think strategically about robotics.

We’re in talks to revitalize the Congressional Robotics Caucus, as well as with Robust.AI. We can also show how the advances in AI can help grow robotics.

Manufacturing job openings currently exceed unemployment rates. Source: BLS.gov

NVIDIA, ORBIT-Surgical teach surgical robots key skills in simulation

ORBIT-Surgical, developed using NVIDIA Isaac Sim and NVIDIA Omniverse, showed how to train robots to move a needle at ICRA.

A collaboration between NVIDIA and academic researchers is prepping robots for surgery. Researchers from the University of Toronto, UC Berkeley, ETH Zurich, Georgia Tech, and NVIDIA developed ORBIT-Surgical, a simulation framework for training robots that could augment the skills of surgical teams while reducing surgeons’ cognitive load.

ORBIT-Surgical supports more than a dozen maneuvers inspired by the training curriculum for laparoscopic procedures, a.k.a. minimally invasive surgery. Examples include grasping small objects like needles, passing them from one arm to another, and placing them with high precision.

The researchers built the physics-based framework using NVIDIA Isaac Sim, a robotics simulation platform for designing, training and testing AI-based robots. They trained reinforcement learning and imitation learning algorithms on NVIDIA GPUs and used NVIDIA Omniverse, a platform for developing and deploying advanced 3D applications. The university and NVIDIA collaborators also used pipelines based on Universal Scene Description (OpenUSD) to enable photorealistic rendering.

The Intuitive Foundation, a nonprofit supported by robotic surgery leader Intuitive Surgical, provided the community-supported da Vinci Research Kit (dVRK). With it, the ORBIT-Surgical research team demonstrated how training a digital twin in simulation transfers to a physical robot in a lab environment in the video below.

The researchers presented ORBIT-Surgical this month at ICRA, the IEEE International Conference on Robotics and Automation in Yokohama, Japan. The open-source code package is now available on GitHub.

A stitch in AI saves nine with ORBIT-Surgical

ORBIT-Surgical is based on Isaac Orbit, a modular framework for robot learning built on Isaac Sim. Orbit includes support for various libraries for reinforcement learning and imitation learning, where artificial intelligence agents are trained to mimic ground-truth expert examples.
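
As a rough sketch of the imitation-learning side, the example below shows one behavior-cloning update in PyTorch: a small policy network is regressed onto expert state-action pairs. The state and action dimensions are assumed for illustration; this is a generic example, not ORBIT-Surgical's training code.

```python
import torch
import torch.nn as nn

# Toy policy: 32-dim robot state in, 7-dim arm action out (assumed sizes)
policy = nn.Sequential(nn.Linear(32, 128), nn.ReLU(), nn.Linear(128, 7))
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-4)

def bc_step(states: torch.Tensor, expert_actions: torch.Tensor) -> float:
    """One behavior-cloning update: regress the policy onto expert actions."""
    loss = nn.functional.mse_loss(policy(states), expert_actions)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Hypothetical batch of 256 expert demonstrations
loss = bc_step(torch.randn(256, 32), torch.randn(256, 7))
```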

The surgical framework enables developers to train robots like the dVRK to manipulate both rigid and soft objects using reinforcement learning and imitation learning frameworks running on NVIDIA RTX GPUs.

ORBIT-Surgical introduced more than a dozen benchmark tasks for surgical training, including one-handed tasks such as picking up a piece of gauze, inserting a shunt into a blood vessel (see video below), or lifting a suture needle to a specific position. It also included two-handed tasks, like handing a needle from one arm to another, passing a threaded needle through a ring pole, and reaching two arms to specific positions while avoiding obstacles.

By developing a surgical simulator that takes advantage of GPU acceleration and parallelization, the team said it was able to boost robot learning speed by an order of magnitude compared to existing surgical frameworks. The researchers found that the robot’s digital twin could be trained to complete tasks like inserting a shunt and lifting a suture needle in under two hours on a single NVIDIA RTX GPU.

With the visual realism enabled by rendering in Omniverse, ORBIT-Surgical also allowed researchers to generate high-fidelity synthetic data, which could help train AI models for perception tasks such as segmenting surgical tools in real-world videos captured in the operating room.

A proof of concept showed that combining simulation and real-world data significantly improved the accuracy of an AI model to segment surgical needles from images — helping reduce the need for large, expensive real-world datasets for training such models, said the team.
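A hedged sketch of what such a mixed dataset can look like in practice: pool a large rendered set with a small hand-labeled real set and train a segmentation model on the union. The dataset sizes and shapes below are illustrative stand-ins, not the team’s pipeline.

    import torch
    from torch.utils.data import ConcatDataset, DataLoader, TensorDataset

    # Stand-ins for (image, mask) pairs: many synthetic frames, few real ones.
    synthetic = TensorDataset(torch.rand(512, 3, 64, 64),
                              torch.randint(0, 2, (512, 64, 64)))
    real = TensorDataset(torch.rand(64, 3, 64, 64),
                         torch.randint(0, 2, (64, 64, 64)))

    combined = ConcatDataset([synthetic, real])  # sim and real in one pool
    loader = DataLoader(combined, batch_size=16, shuffle=True)

    for images, masks in loader:
        pass  # feed a needle-segmentation model (e.g., a U-Net) here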

Read the paper behind ORBIT-Surgical, and learn more about NVIDIA-authored papers at ICRA.

About the author

Isha Salian writes on deep learning, science, and healthcare, among other topics, as part of NVIDIA’s corporate communications team. She first joined the company as an intern in summer 2015. Salian has a journalism M.A., as well as undergraduate degrees in communication and English, from Stanford University.

Editor’s note: This article was syndicated from NVIDIA’s blog.

Stanford researcher discusses UMI gripper and diffusion AI models https://www.therobotreport.com/interview-with-chung-chi-about-the-umi-gripper-and-diffusion-ai-models/ https://www.therobotreport.com/interview-with-chung-chi-about-the-umi-gripper-and-diffusion-ai-models/#respond Sat, 25 May 2024 14:30:46 +0000 https://www.therobotreport.com/?p=579086 Stanford Ph.D. researcher Cheng Chi discusses the development of the UMI gripper and the use of diffusion AI models for robotics.

The post Stanford researcher discusses UMI gripper and diffusion AI models appeared first on The Robot Report.


The Robot Report recently spoke with Ph.D. student Cheng Chi about his research at Stanford University and recent publications about using diffusion AI models for robotics applications. He also discussed the recent universal manipulation interface, or UMI gripper, project, which demonstrates the capabilities of diffusion model robotics.

The UMI gripper was part of his Ph.D. thesis work, and he has open-sourced the gripper design and all of the code so that others can continue to help evolve the AI diffusion policy work.

AI innovation accelerates

How did you get your start in robotics?

Stanford researcher Cheng Chi. | Credit: Huy Ha

I worked in the robotics industry for a while, starting at the autonomous vehicle company Nuro, where I was doing localization and mapping.

And then I applied for my Ph.D. program and ended up with my advisor Shuran Song. We were both at Columbia University when I started my Ph.D., and then last year, she moved to Stanford to become full-time faculty, and I moved [to Stanford] with her.

For my Ph.D. research, I started as a classical robotics researcher, and then I began working with machine learning, specifically for perception. Then in early 2022, diffusion models started to work for image generation; that’s when DALL-E 2 came out, and that’s also when Stable Diffusion came out.

I realized there were specific ways in which diffusion models could be formulated to solve a couple of really big problems for robotics, in terms of end-to-end learning and the action representation for robotics.

So, I wrote one of the first papers that brought the diffusion model into robotics, called diffusion policy. That was my project right before the UMI project, and I think it’s the foundation of why the UMI gripper works. There’s a paradigm shift happening; my project was one part of it, but other robotics research projects are also starting to work.
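In broad strokes, a diffusion policy generates a robot action by iterative denoising: start from Gaussian noise and repeatedly subtract the noise a trained network predicts, conditioned on the observation. The sketch below follows the standard DDPM sampler with made-up dimensions and an untrained stand-in network; it illustrates the idea rather than reproducing Chi’s released code.

    import torch

    OBS_DIM, ACT_DIM, T = 32, 7, 50  # hypothetical sizes and step count

    # Stand-in noise predictor: takes (observation, noisy action, timestep).
    eps_net = torch.nn.Sequential(
        torch.nn.Linear(OBS_DIM + ACT_DIM + 1, 256), torch.nn.Mish(),
        torch.nn.Linear(256, ACT_DIM),
    )

    betas = torch.linspace(1e-4, 0.02, T)  # standard DDPM noise schedule
    alphas = 1.0 - betas
    alpha_bar = torch.cumprod(alphas, dim=0)

    @torch.no_grad()
    def sample_action(obs):
        act = torch.randn(1, ACT_DIM)  # start from pure noise
        for t in reversed(range(T)):
            t_feat = torch.full((1, 1), t / T)
            eps = eps_net(torch.cat([obs, act, t_feat], dim=1))
            # DDPM update: remove predicted noise, rescale, re-inject noise.
            act = (act - betas[t] / torch.sqrt(1 - alpha_bar[t]) * eps)
            act = act / torch.sqrt(alphas[t])
            if t > 0:
                act = act + torch.sqrt(betas[t]) * torch.randn_like(act)
        return act

    action = sample_action(torch.zeros(1, OBS_DIM))  # a 7-DoF action vector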

A lot has changed in the past few years. Is artificial intelligence innovation accelerating?

Yes, exactly. I experienced it firsthand in academia. Imitation learning was considered the dumbest thing you could possibly do for machine learning with robotics. It’s like, you teleoperate the robot to collect data, and the data pairs images with the corresponding actions.

In class, we’re taught that people proved this paradigm of imitation learning, or behavior cloning, doesn’t work. People proved that errors grow exponentially. And that’s why you need reinforcement learning and all the other methods that can address these limitations.

But fortunately, I wasn’t paying too much attention in class. So I just went to the lab and tried it, and it worked surprisingly well. I wrote the code, applied the diffusion model, and for my first task, it just worked. I said, “That’s too easy. That’s not worth a paper.”

I kept adding more tasks like online benchmarks, trying to break the algorithm so that I could find a smart angle that I could improve on this dumb idea that would give me a paper, but I just kept adding more and more things, and it just refused to break.

So there are simulation benchmarks online. I used four different benchmarks and just tried to find an angle to break it so that I could write a better paper, but it just didn’t break. Our baseline performance was 50% to 60%. And after applying the diffusion model, it was like 95%. It was a huge jump. And that’s the moment I realized, maybe there’s something big happening here.

The first diffusion policy research at Columbia was to push a T into position on a table. | Credit: Cheng Chi

How did those findings lead to published research?

That summer, I interned at Toyota Research Institute, and that’s where I started doing real-world experiments using a UR5 [cobot] to push a block into a location. It turned out that this worked really well on the first try.

Normally, you need a lot of tuning to get something to work. But this was different. When I tried to perturb the system, it just kept pushing it back to its original place.

And so that paper got published, and I think it’s my proudest work. I open-sourced the paper and all the code because the results were so good, I was worried that people were not going to believe them. As it turned out, it’s not a coincidence, and other people can reproduce my results and also get very good performance.

I realized that now there’s a paradigm shift. Before [this UMI Gripper research], I needed to engineer a separate perception system, planning system, and then a control system. But now I can combine all of them with a single neural network.

The most important thing is that it’s agnostic to tasks. With the same robot, I can just collect a different data set and train a model with a different data set, and it will just do the different tasks.

Obviously, the data collection part is painful, as I need to do it 100 to 300 times in one environment to get it to work. But in actuality, it’s maybe one afternoon’s worth of work. Compared with tuning a sim-to-real transfer algorithm, which takes me a few months, this is a big improvement.




UMI Gripper training ‘all about the data’

When you’re training the system for the UMI Gripper, you’re just using the vision feedback and nothing else?

Just the cameras and the end effector pose of the robot — that’s it. We had two cameras: one side camera that was mounted onto the table, and the other one on the wrist.
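For illustration, a single recorded demonstration step might be stored roughly like this; the field names and shapes are hypothetical, not the project’s actual schema:

    import numpy as np

    # One hypothetical demonstration step: two camera views, the measured
    # end-effector pose, and the pose target the policy should reproduce.
    step = {
        "side_cam": np.zeros((240, 320, 3), dtype=np.uint8),   # table-mounted RGB
        "wrist_cam": np.zeros((240, 320, 3), dtype=np.uint8),  # wrist-mounted RGB
        "ee_pose": np.array([0.40, 0.00, 0.20, 0.0, 0.0, 0.0, 1.0]),  # xyz + quaternion
        "action": np.array([0.41, 0.00, 0.20, 0.0, 0.0, 0.0, 1.0]),   # next pose target
    }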

That was the original algorithm at the time, and I could change to another task and use the same algorithm, and it would just work. This was a big, big difference. Previously, we could only afford one or two tasks per paper because it was so time-consuming to set up a new task.

But with this paradigm, I can pump out a new task in a few days. It’s a really big difference. That’s also the moment I realized that the key trend is that it’s all about data now. I realized, after training more tasks, that my code hadn’t changed in a few months.

The only thing that changed was the data, and whenever the robot doesn’t work, it’s not the code, it’s the data. So when I just add more data, it works better.

And that prompted me to think that we are entering the same paradigm as other AI fields. For example, large language models and vision models started with a small data regime in 2015, but now, with a huge amount of internet data, they work like magic.

The algorithm doesn’t change that much. The only thing that changed is the scale of training, and maybe the size of the models, and that makes me feel like robotics is about to enter that regime soon.

Two UR cobots equipped with UMI grippers demonstrate the folding of a shirt. | Credit: Cheng Chi video

Can these different AI models be stacked like Lego building blocks to build more sophisticated systems?

I believe in big models, but I think they might not be the same thing as you imagine, like Lego blocks. I suspect that the way you build AI for robotics will be that you take whatever tasks you want to do, you collect a whole bunch of data for the task, run that through a model, and then you get something you can use.

If you have a whole bunch of these different types of data sets, you can combine them to train an even bigger model. You can call that a foundation model, and you can adapt it to whatever use case. You’re using data, not building blocks, and not code. That’s my expectation of how this will evolve.

But simultaneously, there’s a problem here. I think the robotics industry was tailored toward the assumption that robots are precise, repeatable, and predictable. But they’re not adaptable. So the entire robotics industry is geared toward vertical end-use cases optimized for these properties.

Whereas robots powered by AI will have different sets of properties, and they won’t be good at being precise. They won’t be good at being reliable, they won’t be good at being repeatable. But they will be good at generalizing to unseen environments. So you need to find specific use cases where it’s okay if you fail maybe 0.1% of the time.

Safety versus generalization

Robots in industry must be safe 100% of the time. What do you think the solution is to this requirement?

I think if you want to deploy robots in use cases where safety is critical, you either need to have a classical system or a shell that protects the AI system so that it guarantees that when something bad happens, at least there’s a worst-case scenario to make sure that something bad doesn’t actually happen.

Or you design the hardware such that the hardware is [inherently] safe. Hardware is simple. Industrial robots, for example, don’t rely that much on perception. They have expensive motors, gearboxes, and harmonic drives to make a really precise and very stiff mechanism.

When you have a robot with a camera, it is very easy to implement visual servoing and make adjustments for imprecise robots. So robots don’t have to be precise anymore. Compliance can be built into the robot mechanism itself, and this can make it safer. But all of this depends on finding the verticals and use cases where these properties are acceptable.
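As a rough sketch of the visual servoing idea Chi describes (illustrative gains and camera scale, not any particular robot’s code), a proportional loop keeps correcting the end effector based on the pixel error the camera reports, so mechanical imprecision washes out over a few iterations:

    import numpy as np

    GAIN = 0.5                           # fraction of the measured error corrected per step
    PX_PER_M = 1000.0                    # rough camera scale, pixels per meter (made up)
    TARGET_PX = np.array([128.0, 90.0])  # where the detector sees the goal

    def detect_gripper_px(ee_xy):
        # Stand-in for a vision detector; in reality this comes from the
        # camera, so calibration error and mechanical slop are measured,
        # not modeled.
        return ee_xy * PX_PER_M

    def servo_step(ee_xy):
        """One correction: move the end effector to shrink the image-space error."""
        error_px = TARGET_PX - detect_gripper_px(ee_xy)
        return ee_xy + GAIN * error_px / PX_PER_M  # proportional control

    ee_xy = np.array([0.100, 0.070])  # start 28 mm and 20 mm off target
    for _ in range(8):
        ee_xy = servo_step(ee_xy)     # error roughly halves each iteration
    print(ee_xy)                      # -> close to [0.128, 0.090]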

Robot ‘SuperLimbs’ help astronauts stand up after falling https://www.therobotreport.com/robot-superlimbs-help-astronauts-stand-up-after-falling/ https://www.therobotreport.com/robot-superlimbs-help-astronauts-stand-up-after-falling/#respond Sun, 19 May 2024 14:00:03 +0000 https://www.therobotreport.com/?p=579107 The design could prove useful in the coming years, with the launch of NASA’s Artemis mission, which plans to send astronauts back to the moon for the first time in over 50 years.

The post Robot ‘SuperLimbs’ help astronauts stand up after falling appeared first on The Robot Report.


Need a moment of levity? Try watching videos of astronauts falling on the moon. NASA’s outtakes of Apollo astronauts tripping and stumbling as they bounce in slow motion are delightfully relatable.

For MIT engineers, the lunar bloopers also highlight an opportunity to innovate.

“Astronauts are physically very capable, but they can struggle on the moon, where gravity is one-sixth that of Earth’s but their inertia is still the same. Furthermore, wearing a spacesuit is a significant burden and can constrict their movements,” says Harry Asada, professor of mechanical engineering at MIT. “We want to provide a safe way for astronauts to get back on their feet if they fall.”

Asada and his colleagues are designing a pair of wearable robotic limbs that can physically support an astronaut and lift them back on their feet after a fall. The system, which the researchers have dubbed Supernumerary Robotic Limbs, or “SuperLimbs,” is designed to extend from a backpack, which would also carry the astronaut’s life support system, along with the controller and motors to power the limbs.

The researchers have built a physical prototype, as well as a control system to direct the limbs, based on feedback from the astronaut using it. The team tested a preliminary version on healthy subjects who also volunteered to wear a constrictive garment similar to an astronaut’s spacesuit. When the volunteers attempted to get up from a sitting or lying position, they did so with less effort when assisted by SuperLimbs, compared to when they had to recover on their own.

The MIT team envisions that SuperLimbs can physically assist astronauts after a fall and, in the process, help them conserve their energy for other essential tasks. The design could prove especially useful in the coming years, with the launch of NASA’s Artemis mission, which plans to send astronauts back to the moon for the first time in over 50 years. Unlike the largely exploratory mission of Apollo, Artemis astronauts will endeavor to build the first permanent moon base — a physically demanding task that will require multiple extended extravehicular activities (EVAs).

“During the Apollo era, when astronauts would fall, 80 percent of the time it was when they were doing excavation or some sort of job with a tool,” says team member and MIT doctoral student Erik Ballesteros. “The Artemis missions will really focus on construction and excavation, so the risk of falling is much higher. We think that SuperLimbs can help them recover so they can be more productive, and extend their EVAs.”

Asada, Ballesteros, and their colleagues presented their design and study at the IEEE International Conference on Robotics and Automation (ICRA). Their co-authors include MIT postdoc Sang-Yoep Lee and Kalind Carpenter of the Jet Propulsion Laboratory.

Taking a stand

The team’s design is the latest application of SuperLimbs, which Asada first developed about a decade ago and has since adapted for a range of applications, including assisting workers in aircraft manufacturing, construction, and ship building.

Most recently, Asada and Ballesteros wondered whether SuperLimbs might assist astronauts, particularly as NASA plans to send astronauts back to the surface of the moon.

SuperLimbs, a system of wearable robotic limbs, is designed to lift up astronauts after they fall. | Credit: MIT

“In communications with NASA, we learned that this issue of falling on the moon is a serious risk,” Asada says. “We realized that we could make some modifications to our design to help astronauts recover from falls and carry on with their work.”

The team first took a step back, to study the ways in which humans naturally recover from a fall. In their new study, they asked several healthy volunteers to attempt to stand upright after lying on their side, front, and back.

The researchers then looked at how the volunteers’ attempts to stand changed when their movements were constricted, similar to the way astronauts’ movements are limited by the bulk of their spacesuits. The team built a suit to mimic the stiffness of traditional spacesuits and had volunteers don it before again attempting to stand up from various fallen positions. The volunteers’ sequence of movements was similar, though it required much more effort compared to their unencumbered attempts.

The team mapped the movements of each volunteer as they stood up, and found that they each carried out a common sequence of motions, moving from one pose, or “waypoint,” to the next, in a predictable order.

“Those ergonomic experiments helped us to model in a straightforward way, how a human stands up,” Ballesteros says. “We could postulate that about 80 percent of humans stand up in a similar way. Then we designed a controller around that trajectory.”
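A controller “designed around that trajectory” could, in sketch form, store the common stand-up poses as waypoints and advance through them as the wearer gets close to each one. The waypoint values and tolerance below are invented for illustration; they are not the team’s parameters.

    import numpy as np

    # Hypothetical torso waypoints [height in m, pitch in rad] for a stand-up,
    # roughly lying -> kneeling -> crouched -> upright.
    WAYPOINTS = np.array([
        [0.20, 1.40],
        [0.45, 0.90],
        [0.70, 0.40],
        [0.95, 0.05],
    ])
    TOLERANCE = 0.08  # how close the wearer must get before advancing

    def support_target(pose, idx):
        """Return the pose the limbs should push toward, advancing waypoints."""
        if idx < len(WAYPOINTS) - 1 and np.linalg.norm(pose - WAYPOINTS[idx]) < TOLERANCE:
            idx += 1  # wearer reached this waypoint; aim for the next one
        return WAYPOINTS[idx], idx

    pose, idx = np.array([0.22, 1.35]), 0    # sensed torso pose, first waypoint
    target, idx = support_target(pose, idx)  # feed the target to a PD force loop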

SuperLimbs lend a helping hand

The team developed software to generate a trajectory for a robot, following a sequence that would help support a human and lift them back on their feet. They applied the controller to a heavy, fixed robotic arm, which they attached to a large backpack. The researchers then attached the backpack to the bulky suit and helped volunteers back into the suit. They asked the volunteers to again lie on their back, front, or side, and then had them attempt to stand as the robot sensed the person’s movements and adapted to help them to their feet.

Overall, the volunteers were able to stand stably with much less effort when assisted by the robot, compared to when they tried to stand alone while wearing the bulky suit.

“It feels kind of like an extra force moving with you,” says Ballesteros, who also tried out the suit and arm assist. “Imagine wearing a backpack and someone grabs the top and sort of pulls you up. Over time, it becomes sort of natural.”




The experiments confirmed that the control system can successfully direct a robot to help a person stand back up after a fall. The researchers plan to pair the control system with their latest version of SuperLimbs, which comprises two multi-jointed robotic arms that can extend out from a backpack. The backpack would also contain the robot’s battery and motors, along with an astronaut’s ventilation system.

“We designed these robotic arms based on an AI search and design optimization, to look for designs of classic robot manipulators with certain engineering constraints,” Ballesteros says. “We filtered through many designs and looked for the design that consumes the least amount of energy to lift a person up. This version of SuperLimbs is the product of that process.”

Over the summer, Ballesteros will build out the full SuperLimbs system at NASA’s Jet Propulsion Laboratory, where he plans to streamline the design and minimize the weight of its parts and motors using advanced, lightweight materials. Then, he hopes to pair the limbs with astronaut suits, and test them in low-gravity simulators, with the goal of someday assisting astronauts on future missions to the moon and Mars.

“Wearing a spacesuit can be a physical burden,” Asada notes. “Robotic systems can help ease that burden, and help astronauts be more productive during their missions.”

Editor’s Note: This article was republished from MIT News.

NVIDIA researchers show geometric fabric controllers for robots at ICRA https://www.therobotreport.com/nvidia-geometric-fabric-controllers-robot-deployment-icra/ https://www.therobotreport.com/nvidia-geometric-fabric-controllers-robot-deployment-icra/#respond Sun, 19 May 2024 12:01:38 +0000 https://www.therobotreport.com/?p=579108 NVIDIA teams presented their findings on geometric fabrics, among other robotics research, at ICRA in Japan.

The post NVIDIA researchers show geometric fabric controllers for robots at ICRA appeared first on The Robot Report.

Researchers reported at ICRA that they can vectorize controllers so they’re available both during training and deployment. | Source: NVIDIA

NVIDIA Corp. research teams presented their findings at the IEEE International Conference on Robotics and Automation, or ICRA, last week in Yokohama, Japan. One group, in particular, presented research focusing on geometric fabrics, a popular topic at the event. 

In robotics, trained policies are approximate by nature. This means that while these policies usually do the right thing, sometimes they make a robot move too fast, collide with things, or jerk around. Generally, roboticists cannot be certain of everything that might occur.

To counteract this, these trained policies are always deployed with a layer of low-level controllers that intercept the commands from the policy. This is especially true when using reinforcement learning-trained policies on a physical robot, said the team at the NVIDIA Robotics Research Lab in Seattle. These controllers then translate the commands from the policy so they mitigate the limitations of the hardware. 

These controllers also run alongside reinforcement learning (RL) policies during the training phase. It was during this phase that the researchers found a unique opportunity offered by GPU-accelerated RL training tools: the controllers can be vectorized so they’re available both during training and deployment.
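Concretely, vectorizing a controller means writing the control law as batched tensor operations, so the same function can filter thousands of policy actions during GPU training and a batch of one on the physical robot. The sketch below uses made-up joint limits and illustrates the pattern; it is not NVIDIA’s geometric fabric code.

    import torch

    VEL_LIMIT = 1.5                 # rad/s joint speed cap (made-up number)
    POS_MIN, POS_MAX = -2.6, 2.6    # rad joint limits (made-up numbers)

    def safe_controller(q, q_target, dt=0.01):
        """Batched low-level filter; q has shape (num_envs, num_joints)."""
        dq = (q_target - q) / dt
        dq = torch.clamp(dq, -VEL_LIMIT, VEL_LIMIT)        # respect speed limits
        return torch.clamp(q + dq * dt, POS_MIN, POS_MAX)  # respect joint limits

    # Training: thousands of simulated robots at once ...
    q_train = safe_controller(torch.zeros(4096, 16), torch.rand(4096, 16))
    # ... deployment: the identical code with a batch of one.
    q_real = safe_controller(torch.zeros(1, 16), torch.rand(1, 16))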

Out in the real world, companies working on, say, humanoid robots already demonstrate this with low-level controllers that balance the robot and keep it from running its arms into its own body.




Researchers draw on past work for current project 

The research team built on two previous NVIDIA projects for this current paper. The first was “Geometric Fabrics: Generalizing Classical Mechanics to Capture the Physics of Behavior,” which won a best paper award at last year’s ICRA. The Santa Clara, Calif.-based company’s team vectorized the controllers produced in that project.

The in-hand manipulation tasks the researchers address in this year’s paper also come from a well-known line of research on DeXtreme. In this new work, the researchers merged those two lines of research to train DeXtreme policies over the top of vectorized geometric fabric controllers.

NVIDIA’s team said this keeps the robot safer, guides policy learning through the nominal fabric behavior, and systematizes simulation-to-reality (sim2real) training and deployment to get one step closer to using RL tooling in production settings. 

From this, the researchers built foundational infrastructure that enabled them to iterate quickly and get the domain randomization right during training, setting them up for successful sim2real deployment.

For example, by iterating quickly between training and deployment, the team reported that it could adjust the fabric structure and add substantial random perturbation forces during training to achieve a higher level of robustness than in previous work.

In prior DeXtreme work, the real-world experiments were extremely hard on the physical robot. They wore down the motors and sensors while changing the behavior of the underlying control over the course of experimentation.

At one point, the robot even broke down and started smoking. With geometric fabric controllers underlying the policy and protecting the robot, the researchers found they could be much more liberal in deploying and testing policies without worrying about the robot destroying itself. 

NVIDIA presents more research at ICRA

NVIDIA highlighted four other papers its researchers submitted to ICRA this year. They are: 

  • SynH2R: The researchers behind this paper proposed a framework to generate realistic human grasping motions that can be used for training a robot. With the method, the team could generate synthetic training and testing data with 100 times more objects than previous work. The team said its method is competitive with state-of-the-art methods that rely on real human motion data both in simulation and on a real system.
  • Out of Sight, Still in Mind: In this paper, NVIDIA’s researchers tested a robotic arm’s reaction to objects it had previously seen but that were then occluded. With the team’s approaches, robots can perform multiple challenging tasks, including reasoning about occluded objects, objects with novel appearances, and object reappearance. The company claimed that these approaches outperformed implicit memory baselines.
  • Point Cloud World Models: The researchers set up a novel point cloud world model and point cloud-based control policies that were able to improve performance, reduce learning time, and increase robustness for robotic learners. 
  • SKT-Hang: This team looked at the problem of how to use a robot to hang up a wide variety of objects on different supporting structures. This is a deceptively tricky problem, as there are countless variations in both the shape of objects and the supporting structure poses.

Surgical simulation uses Omniverse

NVIDIA also presented ORBIT-Surgical, a physics-based surgical robot simulation framework with photorealistic rendering powered by NVIDIA Isaac Sim on the NVIDIA Omniverse platform. It uses GPU parallelization to facilitate the study of robot learning to augment human surgical skills.

The framework also enables realistic synthetic data generation for active perception tasks. The researchers demonstrated ORBIT-Surgical sim2real transfer of learned policies onto a physical dVRK robot. They plan to release the underlying simulation application as a free, open-source package upon publication. 

In addition, the DefGoalNet paper focuses on shape servoing, a robotic task dedicated to controlling objects to create a specific goal shape.

Partners present their developments at ICRA

NVIDIA partners also showed their latest developments at ICRA. ANYbotics presented a complete software package to grant users access to low-level controls down to the Robot Operating System (ROS).

Franka Robotics highlighted its work with NVIDIA Isaac Manipulator, an NVIDIA Jetson-based AI companion to power robot control and the Franka toolbox for Matlab. Enchanted Tools exhibited its Jetson-powered Mirokaï robots.

NVIDIA recently participated in the Robotics Summit & Expo in Boston and the opening of Teradyne Robotics’ new headquarters in Odense, Denmark.

NVIDIA partner Enchanted Tools showed Mirokaï at CES and ICRA. | Source: Enchanted Tools

Researchers build microrobots to remove microplastics from water https://www.therobotreport.com/researchers-build-microrobots-to-remove-microplastics-from-water/ https://www.therobotreport.com/researchers-build-microrobots-to-remove-microplastics-from-water/#respond Sun, 12 May 2024 14:10:26 +0000 https://www.therobotreport.com/?p=579013 When old food packaging, discarded children’s toys and other mismanaged plastic waste break down into microplastics, they become even harder to clean up from oceans and waterways. Researchers are turning to microrobots for help.

The post Researchers build microrobots to remove microplastics from water appeared first on The Robot Report.


When old food packaging, discarded children’s toys and other mismanaged plastic waste break down into microplastics, they become even harder to clean up from oceans and waterways. These tiny bits of plastic also attract bacteria, including those that cause disease. Researchers are turning to microrobots for help.

In a study in ACS Nano, a publication of the American Chemical Society, researchers describe swarms of microscale robots (microrobots) that captured bits of plastic and bacteria from water. Afterward, the microrobots were decontaminated and reused.

Microplastics, which measure 5 millimeters or less, add another dimension to the plastic pollution problem because animals can eat them, potentially being harmed or passing the particles into the food chain that ends with humans. So far, the health effects for people are not fully understood. However, microplastics themselves aren’t the only concern.

These pieces attract bacteria, including pathogens, which can also be ingested. To remove microbes and plastic from water simultaneously, Martin Pumera and colleagues turned to microscale robotic systems made up of many small components that work collaboratively, mimicking natural swarms like schools of fish.

A microscope image shows the microrobots (yellow) along with trapped bacteria (green) and tiny pieces of plastic (white). | Credit: ACS Nano

To construct the microrobots, the team linked strands of a positively charged polymer to magnetic microparticles, which only move when exposed to a magnetic field. The polymer strands, which radiate from the surface of the beads, attract both plastics and microbes. And the finished products — the individual robots — measured 2.8 micrometers in diameter. When exposed to a rotating magnetic field, the robots swarmed together. By adjusting the number of robots that self-organized into flat clusters, the researchers found that they could alter the swarm’s movement and speed.

In lab experiments, the team replicated microplastics and bacteria in the environment by adding fluorescent polystyrene beads (1 micrometer wide) and actively swimming Pseudomonas aeruginosa bacteria, which can cause pneumonia and other infections, to a water tank. Next, the researchers added microrobots to the tank and exposed them to a rotating magnetic field for 30 minutes, switching it on and off every 10 seconds.

A robot concentration of 7.5 milligrams per milliliter, the densest of four concentrations tested, captured approximately 80% of the bacteria. Meanwhile, at this same concentration, the number of free plastic beads also gradually dropped, as they were drawn to the microrobots.

Afterward, the researchers collected the robots with a permanent magnet and used ultrasound to detach the bacteria clinging to them. They then exposed the removed microbes to ultraviolet radiation, completing the disinfection. When reused, the decontaminated robots still picked up plastic and microbes, albeit smaller amounts of both.

This microrobotic system provides a promising approach for ridding water of plastic and bacteria, the researchers said.

SpaceHopper robot wants to hop along small celestial bodies https://www.therobotreport.com/spacehopper-robot-wants-to-hop-along-small-celestial-bodies/ https://www.therobotreport.com/spacehopper-robot-wants-to-hop-along-small-celestial-bodies/#respond Sat, 11 May 2024 14:30:58 +0000 https://www.therobotreport.com/?p=578997 Developed at ETH Zurich, SpaceHopper is designed to use its three legs to hop along asteroids or far away moons to search for rare minerals.

The post SpaceHopper robot wants to hop along small celestial bodies appeared first on The Robot Report.


ETH Zurich students are developing a robot that can navigate low-gravity environments using a jumping-like mode of locomotion. The team recently tested the SpaceHopper robot in zero gravity scenarios on a European Space Agency parabolic flight.

The team hopes the SpaceHopper robot will be deployed on space missions and used to explore relatively small celestial bodies, like asteroids and moons. These bodies could contain valuable mineral resources that are rare on Earth, and they could also give us clues about our universe’s formation.

While they can provide great insights, these small celestial bodies are difficult to explore. One reason is their low gravity, in contrast to larger bodies like Earth. The researchers behind SpaceHopper designed it to show that legged robots are capable of this difficult task.

ETH launched the project 2.5 years ago as a focus project for Bachelor’s degree students. Now, it’s continuing as a regular research project conducted by five Master’s degree students and one doctoral student.

About the SpaceHopper robot

SpaceHopper looks like a triangular prism with three legs sprouting from its corners. Each leg has three degrees of freedom. The team says this makes locomotion easier, as the robot lacks a preferred orientation. The lightweight robot has a unique way of getting around.

The SpaceHopper team breaks down the robot’s locomotion method into six movement capabilities that ensure reliable and fast travel on an asteroid. They are hopping to traverse large distances, attitude control during flight, controlled landing at a target point, precise short-distance locomotion, the ability to carry a scientific payload, and self-righting after landing.

In other words, the robot uses the power of all nine motors in its legs to make a strong, precise takeoff. This allows the robot to cover large distances or hop over obstacles. While in the air, the robot uses its legs to reorient itself without any flywheels, allowing it to always land on its feet. When it does touch the ground, SpaceHopper’s feet give it a soft landing without any uncontrolled bouncing.
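That flywheel-free reorientation follows from conservation of angular momentum: with zero net momentum in flight, swinging the legs one way rotates the body the other way. A toy one-axis version with invented inertia values (not the team’s controller) shows the bookkeeping:

    # With zero total angular momentum: I_BODY * w_body + I_LEGS * w_legs = 0,
    # so a net leg swing produces a known counter-rotation of the body.
    I_BODY, I_LEGS = 0.12, 0.03  # kg*m^2 about one axis (made-up values)

    def body_rotation_from_leg_swing(leg_swing_rad):
        """Body counter-rotation produced by a net leg swing (one-axis model)."""
        return -(I_LEGS / I_BODY) * leg_swing_rad

    # To pitch the body by +0.5 rad before landing, swing the legs by -2.0 rad:
    needed_swing = -0.5 * I_BODY / I_LEGS
    print(body_rotation_from_leg_swing(needed_swing))  # -> 0.5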

With the help of the European Space Agency’s (ESA) Petri Program, which offers practical experience and training to complement university work, the team is conducting a parabolic flight testing campaign. Parabolic flights, or “zero-gravity flights,” use special planes and roller-coaster-like maneuvers to create moments of weightlessness. In these moments, usually 20 to 30 seconds, the team can test SpaceHopper in space-like conditions.

So far, SpaceHopper has demonstrated the ability to reorient itself using only its legs, and has shown some jumping capabilities in this environment.

Seoul National University wins MassRobotics Form & Function Challenge https://www.therobotreport.com/seoul-national-university-wins-massrobotics-form-function-challenge/ https://www.therobotreport.com/seoul-national-university-wins-massrobotics-form-function-challenge/#respond Wed, 08 May 2024 19:29:01 +0000 https://www.therobotreport.com/?p=578977 Twelve teams from across the globe showcased their robotics and automation projects for the challenge and competed for cash prizes.

The post Seoul National University wins MassRobotics Form & Function Challenge appeared first on The Robot Report.

The Seoul National University team with its winning system. | Source: MassRobotics

MassRobotics announced the winners of its second Form & Function University Robotics Challenge at the Robotics Summit & Expo last week. A panel of judges at the show picked Seoul National University as the first-place winner. 

Twelve teams from across the globe showcased their robotics and automation projects for the challenge and competed for cash prizes. Competitors in the challenge are tasked with creating a robot that looks good, the “form” part, and that works, the “function” part. 

The Seoul National University team built a deployable gantry system for the challenge. Visitors at the show could watch the system 3D print on the concrete floor of the convention center. You can check out the team’s project flyer here (PDF).

The judges awarded second place to the Harvard University team for its Hydrocube, which can move particles in liquid without touching them. The Wentworth Institute of Technology team won third place with its underwater inspection robot.

The University of British Columbia team won this year’s Audience Choice award. The team created a robot that can detect and monitor embers after wildfires. 

MassRobotics says it leaves the challenge’s tasks intentionally vague to encourage creativity and innovation from the competing teams. MassRobotics partners AMD, Analog Devices, Danfoss, Festo, Lattice Semiconductor, Mitsubishi Electric, Novanta, Solidworks, and igus donated the components and software used in the challenge. This allowed the teams to utilize some of the latest and greatest offerings in the industry.

About the winning team 

The Seoul National University team was made up of Sun-Pill Jung, Jaeyoung Song, Chan Kim, Haemin Lee, Inchul Jeong, and Kyu Jin Cho. The team set out to build a highly rigid, extendable boom using a corrugated structure for a deployable mobile gantry robot system.

The team wanted to address future space and transportation issues. While NASA has early plans for taking the first steps towards creating structures on celestial bodies, it’s still a relatively unexplored area of innovation. 

The Seoul National University team created a deployable mobile gantry robot system with a hanging 3D printer. The structure uses Slide-and-Fold Enabling (SaFE) joints to create extendable legs. This allows the team to use the 3D printer to create a range of objects of different shapes and sizes.

U.S. manufacturers invested heavily in robotics in 2023, finds IFR https://www.therobotreport.com/us-manufacturers-invested-heavily-robotics-2023-finds-ifr/ https://www.therobotreport.com/us-manufacturers-invested-heavily-robotics-2023-finds-ifr/#respond Tue, 30 Apr 2024 13:13:40 +0000 https://www.therobotreport.com/?p=578904 Robot installations by U.S. manufacturers, particularly in automotive, climbed 12%, according to the IFR's preliminary findings.

The post U.S. manufacturers invested heavily in robotics in 2023, finds IFR appeared first on The Robot Report.

U.S. manufacturers have increasingly adopted automation, says the IFR.

Total installations of industrial robots rose by 12% and reached 44,303 units in 2023, as U.S. manufacturers invested heavily in more automation, reported the International Federation of Robotics, or IFR, today. The automotive industry is still the No. 1 adopter, followed by the electrical and electronics sector, according to the IFR’s preliminary results.

“The United States has one of the most advanced manufacturing industries worldwide,” stated Marina Bill, president of the IFR. “The first IFR outlook on preliminary results shows again strong robotics demand across all major segments of U.S. manufacturing in 2023.”

U.S. automakers still lead

Sales in the automotive segment rose by 1%, with a record 14,678 robots installed in 2023, said the IFR. This comes after installations in 2022 skyrocketed by 47%, reaching 14,472 units, noted the Frankfurt, Germany-based organization.

The market share of car and component makers reached 33% of all industrial robot installations in the U.S. in 2023. The U.S. has the second-largest production volume of cars and light vehicles worldwide after China.
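The quoted share is consistent with the installation counts above; a quick back-of-the-envelope check, using only numbers from this article:

    total_2023 = 44_303  # all U.S. industrial robot installations in 2023
    automotive = 14_678  # automotive installations in 2023
    print(f"{automotive / total_2023:.1%}")  # -> 33.1%, matching the 33% share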

“Automotive manufacturers currently invest in robotics mainly to drive the electric vehicle transition and respond to labor shortages,” Bill said.

Electrical and electronics industry adopts robots

Installations in the electrical and electronics industry rose by 37% to 5,120 units in 2023, said the IFR. This number almost reached the record pre-pandemic level of 5,284 units, seen in 2018.

The latest result represents a market share of 12% of all industrial robots installed in the U.S. manufacturing industry. Global installations reached record numbers in 2022, the IFR noted.

It attributed recent demand for industrial robots among U.S. electronics makers to efforts to strengthen domestic supply chains and projects supporting clean-energy transitions.




Demand strong among other U.S. manufacturers

Installation counts in other U.S. industries exceeding the 3,000-unit mark included metal and machinery (4,123 units, +6%) and plastic and chemical products (3,213 units, +5%).

They represent a market share of 9% and 7% of U.S. manufacturer robot installations in 2023, respectively, said the IFR.

Canada and Mexico also climb

Robot installations in Canada reached 4,616 units, up 43%. The automotive industry accounts for 55% of the country’s robot installations. Sales to the automotive sector rose by 99%, with 2,549 units installed in 2023, an all-time high.

Robot installations in Mexico’s manufacturing industry remained almost unchanged, with 5,868 units in 2023. The country’s main adopter was the automotive industry, which accounted for 69% of robot installations in 2023.

The IFR said sales to Mexico’s automotive sector reached 4,068 units (-0%) in 2023, the third-best result since the peak of 4,805 units in 2017.

IFR to release more results

The IFR plans to post the presentation of preliminary figures that Bill gave during the IFR Executive Roundtable on May 8. The federation said it will release the final results of the latest World Robotics data on Sept. 24.

The organization will also be present at Booth 2790 at Automate in Chicago next week.
