Sensors, machine vision, and feedback for robotic designs
https://www.therobotreport.com/category/technologies/cameras-imaging-vision/
Robotics news, research and analysis

Foresight to collaborate with KONEC on autonomous vehicle concept
https://www.therobotreport.com/foresight-collaborates-with-konec-autonomous-vehicle-concept/
Mon, 03 Jun 2024
Foresight will integrate its ScaleCam 3D perception technology with KONEC into a conceptual autonomous driving vehicle.

Foresight says its ScaleCam system can generate high-quality depth maps. | Source: Foresight

Foresight Autonomous Holdings Ltd. last week announced that it has signed a co-development agreement with KONEC Co., a Korean Tier 1 automotive supplier. Under the agreement, the companies will integrate Foresight’s ScaleCam 3D perception technology into a concept autonomous vehicle. 

The collaboration is sponsored by the Foundation of Korea Automotive Parts Industry Promotion (KAP), founded by Hyundai Motor Group. The partners said they will combine KONEC’s expertise in developing advanced automotive systems with KAP’s mission to foster innovation within the automobile parts industry. 

“We believe that the collaboration with KONEC represents a significant step forward in the development of next-generation autonomous driving solutions,” stated Haim Siboni, CEO of Foresight. “By combining our resources, image-processing expertise, and innovative technologies, we aim to accelerate the development and deployment of autonomous vehicles, ultimately contributing to safer transportation solutions in the Republic of Korea.” 

Foresight is an innovator in automotive vision systems. The Ness Ziona, Israel-based company is developing smart multi-spectral vision software systems and cellular-based applications. Through its subsidiaries, Foresight Automotive Ltd., Foresight Changzhou Automotive Ltd., and Eye-Net Mobile Ltd., it develops both in-line-of-sight vision systems and beyond-line-of-sight accident-prevention systems. 

KONEC has established a batch production system for lightweight metal raw materials, models, castings, processing, and assembly through cooperation among its group affiliates. The Seosan-si, South Korea-based company's major customers include Tesla, Hyundai Motor, and Kia.

KONEC has also entered the field of camera-based information processing, working with companies that have commercialized systems on chips (SoCs) and modules for Internet of Things (IoT) communication to develop applications such as a license-plate recognition system.


Foresight ScaleCam to enhance autonomous capabilities 

The collaboration will incorporate Foresight’s ScaleCam 360º 3D perception technology, which the company said will enable the self-driving vehicle to accurately perceive its surroundings. Foresight and KONEC said the successful integration of ScaleCam could significantly enhance the capabilities and safety of autonomous vehicles.

ScaleCam is based on stereoscopic technology. The system uses advanced and proven image-processing algorithms, according to Foresight. The company claimed that it provides seamless vision by using two visible-light cameras for highly accurate and reliable obstacle-detection capabilities. 

Typical stereoscopic vision systems require constant calibration to ensure accurate distance measurements, Foresight noted. To solve this, some developers mount stereo cameras on a fixed beam, but this can limit camera placement positions and lead to technical issues, it said.

Foresight asserted that its technology allows for the independent placement of both visible-light and thermal infrared camera modules. This allows the system to support large baselines without mechanical constraints, providing greater distance accuracy at long ranges, it said. 
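Foresight doesn't spell out the underlying geometry in this announcement, but the advantage of a large baseline follows from the standard stereo relationship: for a given focal length and disparity-matching error, depth uncertainty grows roughly with the square of range and shrinks linearly with baseline. The short sketch below illustrates only that relationship; the focal length, matching error, and baseline values are assumptions, not Foresight's figures.

```python
# Standard stereo-geometry approximation, not Foresight's internal model:
# depth error dZ ~ Z^2 * e_d / (f * B), so doubling the baseline B roughly
# halves the error at a given range Z. All numbers below are assumptions.
def depth_error_m(range_m, baseline_m, focal_px=1400.0, disparity_err_px=0.25):
    """Approximate depth error for a stereo pair at a given range."""
    return (range_m ** 2) * disparity_err_px / (focal_px * baseline_m)

for baseline in (0.12, 0.30, 0.60):  # hypothetical camera separations, meters
    print(f"baseline {baseline:.2f} m -> ~{depth_error_m(100.0, baseline):.1f} m error at 100 m")
```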

Stanford researcher discusses UMI gripper and diffusion AI models
https://www.therobotreport.com/interview-with-chung-chi-about-the-umi-gripper-and-diffusion-ai-models/
Sat, 25 May 2024
Stanford Ph.D. researcher Cheng Chi discusses the development of the UMI gripper and the use of diffusion AI models for robotics.

The Robot Report recently spoke with Ph.D. student Cheng Chi about his research at Stanford University and recent publications about using diffusion AI models for robotics applications. He also discussed the recent universal manipulation interface, or UMI gripper, project, which demonstrates the capabilities of diffusion model robotics.

The UMI gripper was part of his Ph.D. thesis work, and he has open-sourced the gripper design and all of the code so that others can continue to help evolve the AI diffusion policy work.

AI innovation accelerates

How did you get your start in robotics?

Stanford researcher Cheng Chi. | Credit: Huy Ha

I worked in the robotics industry for a while, starting at the autonomous vehicle company Nuro, where I was doing localization and mapping.

And then I applied for my Ph.D. program and ended up with my advisor Shuran Song. We were both at Columbia University when I started my Ph.D., and then last year, she moved to Stanford to become full-time faculty, and I moved [to Stanford] with her.

For my Ph.D. research, I started as a classical robotics researcher and then began working with machine learning, specifically for perception. Then in early 2022, diffusion models started to work for image generation; that’s when DALL-E 2 came out, and that’s also when Stable Diffusion came out.

I realized the specific ways in which diffusion models could be formulated to solve a couple of really big problems for robotics, in terms of end-to-end learning and the actual representation for robotics.

So, I wrote one of the first papers that brought the diffusion model into robotics, which is called diffusion policy. That’s my paper for my previous project before the UMI project, and I think that’s the foundation of why the UMI gripper works. There’s a paradigm shift happening; my project was one part of it, but other robotics research projects are also starting to work.

A lot has changed in the past few years. Is artificial intelligence innovation accelerating?

Yes, exactly. I experienced it firsthand in academia. Imitation learning was the dumbest possible thing you could do for machine learning with robotics: you teleoperate the robot to collect data, and the data pairs images with the corresponding actions.

In class, we’re taught that people proved that this paradigm of imitation learning, or behavior cloning, doesn’t work. People proved that errors grow exponentially, and that’s why you need reinforcement learning and all the other methods that can address these limitations.

But fortunately, I wasn’t paying too much attention in class. So I just went to the lab and tried it, and it worked surprisingly well. I wrote the code and applied the diffusion model to this, and for my first task, it just worked. I said, “That’s too easy. That’s not worth a paper.”
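Chi doesn't walk through the implementation in the interview, but the recipe he describes (image-action pairs from teleoperation, with a network learning to denoise action samples conditioned on the observation) can be sketched compactly. The following is a simplified illustration that assumes PyTorch; it is not the open-sourced Diffusion Policy code, and the network, noise schedule, and dimensions are placeholders.

```python
# Simplified diffusion-style imitation learning sketch (assumes PyTorch).
# An image encoder (not shown) turns the camera frame into obs_feat; a small
# MLP learns to predict the noise added to a demonstrated action, conditioned
# on that feature and the diffusion timestep.
import torch
import torch.nn as nn

class ActionDenoiser(nn.Module):
    def __init__(self, obs_dim=512, act_dim=7, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + act_dim + 1, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, act_dim),
        )

    def forward(self, obs_feat, noisy_action, t):
        # t in [0, 1] is the normalized diffusion timestep
        return self.net(torch.cat([obs_feat, noisy_action, t], dim=-1))

def training_step(model, obs_feat, demo_action):
    """One denoising-loss step on a batch of (image feature, action) pairs."""
    t = torch.rand(demo_action.shape[0], 1)            # random timestep per sample
    noise = torch.randn_like(demo_action)
    noisy_action = (1 - t) * demo_action + t * noise   # toy linear noise schedule
    pred_noise = model(obs_feat, noisy_action, t)
    return nn.functional.mse_loss(pred_noise, noise)

model = ActionDenoiser()
loss = training_step(model, torch.randn(8, 512), torch.randn(8, 7))
loss.backward()  # teleoperated demonstrations in, gradient step out
```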

I kept adding more tasks, like online benchmarks, trying to break the algorithm so that I could find a smart angle to improve on this dumb idea and get a paper out of it, but I just kept adding more and more things, and it just refused to break.

So there are simulation benchmarks online. I used four different benchmarks and just tried to find an angle to break it so that I could write a better paper, but it just didn’t break. Our baseline performance was 50% to 60%, and after applying the diffusion model, it was like 95%. So it was a big jump, and that’s the moment I realized maybe there’s something big happening here.

The first diffusion policy research at Columbia was to push a T into position on a table. | Credit: Cheng Chi

How did those findings lead to published research?

That summer, I interned at Toyota Research Institute, and that’s where I started doing real-world experiments using a UR5 [cobot] to push a block into a location. It turned out that this worked really well on the first try.

Normally, you need a lot of tuning to get something to work. But this was different. When I tried to perturb the system, it just kept pushing it back to its original place.

And so that paper got published, and I think that’s my proudest work. I made the paper open-source, and I open-sourced all the code, because the results were so good that I was worried people were not going to believe it. As it turned out, it’s not a coincidence; other people can reproduce my results and also get very good performance.

I realized that now there’s a paradigm shift. Before [this UMI Gripper research], I needed to engineer a separate perception system, planning system, and then a control system. But now I can combine all of them with a single neural network.

The most important thing is that it’s agnostic to tasks. With the same robot, I can just collect a different data set and train a model with a different data set, and it will just do the different tasks.

Obviously, the data-collection part is painful, as I need to do it 100 to 300 times for one environment to get it to work. But in actuality, it’s maybe one afternoon’s worth of work. Compared with tuning a sim-to-real transfer algorithm, which takes me a few months, this is a big improvement.


UMI Gripper training ‘all about the data’

When you’re training the system for the UMI Gripper, you’re just using the vision feedback and nothing else?

Just the cameras and the end effector pose of the robot — that’s it. We had two cameras: one side camera that was mounted onto the table, and the other one on the wrist.

That was the original algorithm at the time, and I could change to another task and use the same algorithm, and it would just work. This was a big, big difference. Previously, we could only afford one or two tasks per paper because it was so time-consuming to set up a new task.

But with this paradigm, I can pump out a new task in a few days. It’s a really big difference. That’s also the moment I realized that the key trend is that it’s all about data now. I realized, after training more tasks, that my code hadn’t changed in a few months.

The only thing that changed was the data, and whenever the robot doesn’t work, it’s not the code, it’s the data. So when I just add more data, it works better.

And that prompted me to think that we are entering the same paradigm as other AI fields. For example, large language models and vision models started with a small data regime in 2015, but now with a huge amount of internet data, they work like magic.

The algorithm doesn’t change that much. The only thing that changed is the scale of training, and maybe the size of the models, and that makes me feel like maybe robotics is about to enter that regime soon.

Two UR cobots equipped with UMI grippers demonstrate the folding of a shirt. | Credit: Cheng Chi video

Can these different AI models be stacked like Lego building blocks to build more sophisticated systems?

I believe in big models, but I think they might not be the same thing as you imagine, like Lego blocks. I suspect that the way you build AI for robotics will be that you take whatever tasks you want to do, you collect a whole bunch of data for the task, run that through a model, and then you get something you can use.

If you have a whole bunch of these different types of data sets, you can combine them, to train an even bigger model. You can call that a foundation model, and you can adapt it to whatever use case. You’re using data, not building blocks, and not code. That’s my expectation of how this will evolve.

But simultaneously, there’s a problem here. I think the robotics industry was tailored toward the assumption that robots are precise, repeatable, and predictable. But they’re not adaptable. So the entire robotics industry is geared towards vertical end-use cases optimized for these properties.

Whereas robots powered by AI will have different sets of properties, and they won’t be good at being precise. They won’t be good at being reliable, they won’t be good at being repeatable. But they will be good at generalizing to unseen environments. So you need to find specific use cases where it’s okay if you fail maybe 0.1% of the time.

Safety versus generalization

Robots in industry must be safe 100% of the time. What do you think the solution is to this requirement?

I think if you want to deploy robots in use cases where safety is critical, you either need to have a classical system or a shell that protects the AI system so that it guarantees that when something bad happens, at least there’s a worst-case scenario to make sure that something bad doesn’t actually happen.
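As a concrete illustration of the kind of "classical shell" Chi describes, a deployment might bound whatever the learned policy proposes before it reaches the robot. This is a minimal sketch with made-up limits and a made-up interface, not any specific product's safety system.

```python
# Illustrative only: clamp an AI policy's Cartesian velocity command to a
# speed limit and a workspace box before sending it to the robot.
import numpy as np

MAX_SPEED = 0.25                         # m/s, assumed safety cap
WS_MIN = np.array([-0.5, -0.5, 0.0])     # assumed workspace bounds, meters
WS_MAX = np.array([0.5, 0.5, 0.6])

def safety_filter(current_pos, proposed_vel, dt=0.01):
    """Return a velocity command that respects the speed and workspace limits."""
    v = np.asarray(proposed_vel, dtype=float)
    speed = np.linalg.norm(v)
    if speed > MAX_SPEED:
        v = v * (MAX_SPEED / speed)      # scale down, keep direction
    next_pos = np.asarray(current_pos, dtype=float) + v * dt
    # Zero out any component that would push the end effector out of bounds
    v = np.where((next_pos < WS_MIN) & (v < 0), 0.0, v)
    v = np.where((next_pos > WS_MAX) & (v > 0), 0.0, v)
    return v

print(safety_filter([0.5, 0.0, 0.3], [1.0, 0.0, 0.0]))  # speed-limited, then vetoed at the box edge
```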

Or you design the hardware such that the hardware is [inherently] safe. Hardware is simple. Industrial robots for example don’t rely that much on perception. They have expensive motors, gearboxes, and harmonic drives to make a really precise and very stiff mechanism.

When you have a robot with a camera, it is very easy to implement vision servoing and make adjustments for imprecise robots. So robots don’t have to be precise anymore. Compliance can be built into the robot mechanism itself, and this can make it safer. But all of this depends on finding the verticals and use cases where these properties are acceptable.
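The visual servoing idea he mentions can be as simple as a proportional controller that nudges the end effector until a tracked feature sits where the camera expects it. The gain, target pixel, and feature detection below are placeholders for illustration, not a particular robot's API.

```python
# Minimal image-based visual servoing sketch: command an in-plane velocity
# proportional to the pixel error between a tracked feature and a target pixel.
import numpy as np

K_P = 0.002  # assumed gain: meters/second of motion per pixel of error

def visual_servo_step(feature_px, target_px=(320, 240)):
    """Return an (vx, vy) correction from one camera frame."""
    error = np.asarray(target_px, dtype=float) - np.asarray(feature_px, dtype=float)
    return K_P * error  # larger error, faster correction; zero error, stop

print(visual_servo_step((300, 260)))  # small correction toward the target pixel
```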

Two-armed InductOne from Plus One Robotics designed for parcel induction
https://www.therobotreport.com/two-armed-inductone-from-plus-one-robotics-designed-for-parcel-induction/
Wed, 08 May 2024
InductOne builds on Plus One Robotics' parcel picking experience and trusted relationships with customers to increase throughput.

InductOne is designed to handle a wide range of parcels with high throughput. Source: Plus One Robotics

CHICAGO — As items pass through warehouses and other facilities, they are often handled as packages rather than as eaches. Plus One Robotics yesterday launched InductOne, a two-armed robot designed to optimize parcel singulation and induction in high-volume fulfillment and distribution centers.

“Parcel variability is a significant challenge of automation within the warehouse,” stated Erik Nieves, CEO of Plus One Robotics. “That’s why InductOne is equipped with our innovative individual cup control [ICC] gripper, which can precisely handle a wide range of parcel sizes and shapes.”

“But it’s not just about what InductOne picks, it’s also about what it doesn’t pick,” he added. “The system avoids picking non-conveyable items, allowing them to automatically convey to a designated exception path and preventing the robots from wasting precious cycles handling items which should not be inducted.”

“We’ve doubled down on parcel handling; we’re not an each picking company,” Nieves told The Robot Report. “Vision and grasping for materials handling is hard, and Plus One continues to focus on packaged goods, which spend most of their time as parcels and can be picked by vacuum grippers.”

InductOne engineered for ease of integration, efficiency

InductOne’s dual-arm design “significantly outperforms single-arm solutions,” claimed Plus One. “While a single-arm system typically tops out at around 1,600 picks per hour, the coordinated motion of InductOne’s two arms can achieve sustained pick rates of 2,200 to 2,300 per hour. InductOne’s peak rate maxes out at a rate of 3,300 picks per hour, 10% faster than the leading competition.”

Plus One Robotics said its engineering team designed InductOne to be capable but as small as possible for easy integration into brownfield facilities and to minimize the need for costly site modifications.

“The engineering approach behind InductOne has been focused on efficiency and flexibility,” said Nieves. “We designed the system to be as compact and lightweight as possible, making it easier to deploy in limited spaces, including on existing mezzanines. The modular and configurable nature of InductOne also allows it to seamlessly integrate into a variety of fulfillment center layouts.”

InductOne includes the PickOne vision system, the Yonder remote supervision software, pick-and-place conveyors, integrated safety features, analytics, and training and ongoing support. The modular system also offers configurable layouts for cross-belt or tray sorters.

It can handle parcels weighing up to 15 lb. (6.8 kg) and up to 27 in. (69 cm) in length, 19 in. (48 cm) in width, and 17 in. (43 cm) in height. InductOne supports boxes, clear and opaque polybags, shipping envelopes, and padded and paper mailers.

InductOne includes vision and grasping refined by millions of picks of experience. Source: Plus One Robotics

Plus One Robotics touts experience and trust

Founded in 2016 by computer vision and robotics industry experts, Plus One Robotics said it combines computer vision, artificial intelligence, and supervised autonomy to pick parcels for leading logistics and e-commerce organizations. The San Antonio, Texas-based company has offices in Boulder, Colo., and the Netherlands.

Plus One said it applied its experience from more than 1 billion picks to develop InductOne. The company said it has learned from handling over 1 million picks per day, and achieving the reliability required for such high-volume operations led to its new parcel-handling machine.

“ICC and InductOne are the culmination of our learnings from these picks,” said Nieves at Automate. “It’s not just vision but also grasping and conveyance. I push back against those who say, ‘Data, data, data,’ because we also need to appreciate the things above and below our system. We’re applying our expertise to the problem, but we’re not trying to be an integrator or just a hardware maker.”

Nieves noted that the value of robotics-as-a-service (RaaS) models is not recurring payments but the option they give users to scale deployments up or down as needed.

“As Pitney Bowes’ Stephanie Cannon said in a panel, it’s important [for automation providers] to get to 70% confidence and then work with customers who trust that you’ll work with them to get the rest of the way,” he said. “Our relationships with FedEx and Home Depot for palletization and parcel handling are built on that trust.”

Forcen closes funding to develop ‘superhuman’ robotic manipulation
https://www.therobotreport.com/forcen-closes-funding-to-develop-superhuman-robotic-manipulation/
Sun, 28 Apr 2024
Forcen is offering customized and off-the-shelf sensors to aid robotic manipulation in complex environments.

Forcen says its technology will help robotic manipulation advance as vision has. Source: Forcen

Forcen last week said it has closed a funding round of CAD $8.35 million ($6.1 million U.S.). The Toronto-based company plans to use the investment to scale up production to support more customers and to continue developing its force/torque sensing technology and edge intelligence.

“We’ve been focused on delivering custom solutions showcasing our world-first technology with world-class quality … and we’re excited for our customers to announce the robots they’ve been working on with our technology,” stated Robert Brooks, founder and CEO of Forcen. “Providing custom solutions has limited the number of customers we take on, but now we’re working to change that.”

Founded in 2015, Forcen said its goal is to enable businesses to easily deploy “(super)human” robotic manipulation in complex and unstructured applications. The company added that its technology is already moving into production with customers in surgical, logistics, humanoid, and space robotics.

Forcen offers two paths to robot manipulation

Forcen said its new customizable offering and off-the-shelf development kits will accelerate development for current customers and help new ones adopt its technology.

The rapidly customizable offering will use generative design and standard subassemblies, noted the company. This will allow customers to select the size, sensing range/sensitivity, overload protection, mounting bolt pattern, and connector type/location.

By fulfilling orders in as little as four to six weeks, Forcen claimed that it can replace the traditional lengthy catalog of sensors, so customers can get exactly what they need for their unique applications.

The company will launch its off-the-shelf development kits later this year. They will cover 3 degree-of-freedom (DoF) and 6 DoF force/torque sensors, as well as Forcen’s cross-roller, bearing-free 3 DoF joint torque sensor and 3 DoF gripper finger.

Off-the-shelf development kits will support different degrees of freedom. Source: Forcen

Force/torque sensors designed for complex applications

Complex and less-structured robotics applications are challenging for conventional force/torque sensing technologies because of the risk of repeated impact/overload, wide temperature ranges/changes, and extreme constraints on size and weight, explained Forcen. These applications are becoming increasingly common in surgical, logistics, agricultural/food, and underwater robotics.

Forcen added that its “full-stack” sensing systems are designed for such applications using three core proprietary technologies:

  • ForceFilm — A monolithic thin-film transducer enabling sensing systems that are lighter, thinner, and more stable across both drift and temperature, and especially scalable for multi-dimensional sensing, the company said.
  • Dedicated Overload — A protection structure that acts as a 6 DoF hard stop. The company said it allows sensitivity and overload protection to be designed separately and enables durable use of the overload structure for thousands of overload events while still achieving millions of sensing cycles.
  • Synap — Forcen’s onboard edge intelligence comes factory compensated/calibrated and can connect to any standard digital bus (USB, CAN, Ethernet, EtherCAT). This can "create a full-stack force/torque sensing solution that is truly plug-and-play with a maintenance/calibration-free operation," the company said. (See the illustrative bus-reading sketch after this list.)
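Forcen has not published Synap's frame format or driver interface here. Purely as an illustration of what reading a six-axis wrench over one of the listed buses might look like, the sketch below uses the python-can library with invented arbitration IDs, packing, and scale factors.

```python
# Hypothetical sketch only, not Forcen's actual protocol: read one six-axis
# force/torque sample from a CAN bus, assuming two frames of three int16
# channels each (Fx, Fy, Fz and Tx, Ty, Tz).
import struct
import can  # python-can

FORCE_ID, TORQUE_ID = 0x100, 0x101   # assumed arbitration IDs
SCALE_N, SCALE_NM = 0.01, 0.001      # assumed counts-to-newton / newton-meter factors

def read_wrench(bus):
    """Block until one force frame and one torque frame have arrived."""
    sample = {}
    while len(sample) < 2:
        msg = bus.recv(timeout=1.0)
        if msg is None:
            raise TimeoutError("no frame received")
        if msg.arbitration_id == FORCE_ID:
            sample["force_n"] = [v * SCALE_N for v in struct.unpack("<3h", msg.data[:6])]
        elif msg.arbitration_id == TORQUE_ID:
            sample["torque_nm"] = [v * SCALE_NM for v in struct.unpack("<3h", msg.data[:6])]
    return sample

# bus = can.Bus(interface="socketcan", channel="can0")
# print(read_wrench(bus))
```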
New offerings include features to support demanding robotics applications. Source: Forcen

Learn about Forcen at the Robotics Summit

Brightspark Ventures and BDC Capital’s Deep Tech Venture Fund co-led Forcen’s funding round, with participation from Garage Capital and MaRS IAF, as well as returning investors including EmergingVC.

“Robotic vision has undergone a revolution over the past decade and is continuing to accelerate with new AI approaches,” said Mark Skapinker, co-founder and partner at Brightspark Ventures. “We expect robotic manipulation to quickly follow in the footsteps of robotic vision and Forcen’s technology to be a key enabler of ubiquitous human-level robotic manipulation.”

Forcen is returning to the Robotics Summit & Expo this week. It will have live demonstrations of its latest technology in Booth 113 at the Boston Convention and Exhibition Center. 

CEO Brooks will be talking on May 1 at 4:15 p.m. EDT about “Designing (Super)Human-Level Haptic Sensing for Surgical Robotics.” Registration is now open for the event, which is co-located with DeviceTalks Boston and the Digital Transformation Forum.


Bota Systems to showcase its latest sensors at Robotics Summit
https://www.therobotreport.com/bota-systems-to-showcase-its-latest-sensors-at-robotics-summit/
Thu, 18 Apr 2024
Bota Systems will be at Booth 315 on the show floor at the Robotics Summit & Expo, which takes place on May 1 and 2, 2024.

Bota offers sensor solutions intended to allow robots to work and move safely. | Source: Bota Systems

Bota Systems will exhibit its recently unveiled sensors featuring a through-hole flange design and enhanced cable management at the Robotics Summit & Expo. The company can be found in Booth 315 on the event’s show floor.

“During the Robotics Summit, we will showcase our complete range of sensors at our booth, and we invite you to experience these sensors in action,” Marco Martinaglia, vice president of marketing at Bota Systems, told The Robot Report. “You’ll see a live demonstration of inertia compensation with a handheld device, and a Mecademic Robot equipped with our cutting-edge MiniONE Pro six-axis sensor will perform automated assembly and deburring tasks.”

The company said it designed its latest sensors for humanoids, industrial, and medical robots. It claimed that they can improve functions in fields such as welding and minimally invasive surgeries.

Bota Systems added that its force-torque sensors can give robots a sense of touch, enabling them to accurately and reliably perform tasks that were previously only possible with manual operators.

Bota Systems designs for ease of integration

“We are particularly excited to have just announced the release of our latest sensor, the PixONE,” said Ilias Patsiaouras, co-founder and chief technology officer of Bota Systems.

“The PixONE sensor’s innovative hollow shaft design allows it to be seamlessly integrated between the robot’s arm and the end-of-arm tooling [EOAT], maintaining the integrity of internal cable routing,” he added. “This design is particularly advantageous as many robotic arm manufacturers and OEMs are moving towards internal routing to eliminate cable tangles and motion restrictions.”

Bota Systems is an official distribution and integration partner of Universal Robots and Mecademic.

In October 2023, the company added NEXT Robotics to its distributor network. NEXT is now its official distributor for the German-speaking countries of Germany, Austria, and Switzerland. That same month, Bota Systems raised $2.5 million in seed funding.

See sensors at the Robotics Summit & Expo

“Our vision is to equip robots with the sense of touch, making them not only safer and more user-friendly, but also more collaborative,” Klajd Lika, co-founder and CEO of Bota Systems, told The Robot Report. “We look forward to the Robotics Summit and Expo because it brings together the visionaries and brightest minds of the industry — this interaction is valuable for us to shape the development of our next generation of innovative sensors.”

This will be the largest Robotics Summit & Expo ever. It will include more than 200 exhibitors, various networking opportunities, a Women in Robotics breakfast, a career fair, an engineering theater, a startup showcase, and more. Registration is now open for the event.

March 2024 robotics investments total $642M
https://www.therobotreport.com/march-2024-robotics-investments-total-642m/
Thu, 18 Apr 2024
March 2024 robotics funding was buoyed by significant investment into software and drone suppliers.

Chinese and U.S. companies led March 2024 robotics investments. Credit: Eacon Mining, Dan Kara

Thirty-seven robotics firms received funding in March 2024, pulling in a total monthly investment of $642 million. March’s investment figure was significantly less than February’s mark of approximately $2 billion, but it was in keeping with other monthly investments in 2023 and early 2024 (see Figure 1, below).

Figure 1: March 2024 investments dropped from the previous month.

California companies secure investment

As described in Table 1 below, the two largest robotics investments in March were secured by software suppliers. Applied Intuition, a provider of software infrastructure to deploy autonomous vehicles at scale, received a $250 million Series E round, while Physical Intelligence, a developer of foundation models and other software for robots and actuated devices, attracted $70 million in a seed round. Both firms are located in California.

Other California firms receiving substantial rounds included Bear Robotics, a manufacturer of self-driving indoor robots that raised a $60 million Series C round, and unmanned aerial system (UAS) developer Firestorm, whose seed funding was $20 million.

Table 1. March 2024 robotics investments

Company | Amount ($) | Round | Country | Technology
Agilis Robotics | 10,000,000 | Series A | China | Surgical/interventional systems
Aloft | Estimate | Other | U.S. | Drones, data acquisition / processing / management
Applied Intuition | 250,000,000 | Series E | U.S. | Software
Automated Architecture | 3,280,000 | Estimate | U.K. | Micro-factories
Bear Robotics | 60,000,000 | Series C | U.S. | Indoor mobile platforms
BIOBOT Surgical | 18,000,000 | Series B | Singapore | Surgical systems
Buzz Solutions | 5,000,000 | Other | U.S. | Drone inspection
Cambrian Robotics | 3,500,000 | Seed | U.K. | Machine vision
Coctrl | 13,891,783 | Series B | China | Software
DRONAMICS | 10,861,702 | Grant | U.K. | Drones
Eacon Mining | 41,804,272 | Series C | China | Autonomous transportation, sensors
ECEON Robotics | Estimate | Pre-seed | Germany | Autonomous forklifts
ESTAT Automation | Estimate | Grant | U.S. | Actuators / motors / servos
Fieldwork Robotics | 758,181 | Grant | U.K. | Outdoor mobile manipulation platforms, sensors
Firestorm Labs | 20,519,500 | Seed | U.S. | Drones
Freespace Robotics | Estimate | Other | U.S. | Automated storage and retrieval systems
Gather AI | 17,000,000 | Series A | U.S. | Drones, software
Glacier | 7,700,000 | Other | U.S. | Articulated robots, sensors
IVY TECH Ltd. | 421,435 | Grant | U.K. | Outdoor mobile platforms
KAIKAKU | Estimate | Pre-seed | U.K. | Collaborative robots
KEF Robotics | Estimate | Grant | U.S. | Drone software
Langyu Robot | Estimate | Other | China | Automated guided vehicles, software
Linkwiz | 2,679,725 | Other | Japan | Software
Motional | Estimate | Seed | U.S. | Autonomous transportation systems
Orchard Robotics | 3,800,000 | Pre-seed | U.S. | Crop management
Pattern Labs | 8,499,994 | Other | U.S. | Indoor and outdoor mobile platforms
Physical Intelligence | 70,000,000 | Seed | U.S. | Software
Piximo | Estimate | Grant | U.S. | Indoor mobile platforms
Preneu | 11,314,492 | Series B | Korea | Drones
QibiTech | 5,333,884 | Other | Japan | Software, operator services, uncrewed ground vehicles
Rapyuta Robotics | Estimate | Other | Japan | Indoor mobile platforms, autonomous forklifts
RIOS Intelligent Machines | 13,000,000 | Series B | U.S. | Machine vision
RITS | 13,901,825 | Series A | China | Sensors, software
Robovision | 42,000,000 | Other | Belgium | Computer vision, AI
Ruoyu Technology | 6,945,312 | Seed | China | Software
Sanctuary Cognitive Systems | Estimate | Other | Canada | Humanoids / bipeds, software
SeaTrac Systems | 899,955 | Other | U.S. | Uncrewed surface vessels
TechMagic | 16,726,008 | Series C | Japan | Articulated robots, sensors
Thor Power | Estimate | Seed | China | Articulated robots
Viam | 45,000,000 | Series B | Germany | Smart machines
WIRobotics | 9,659,374 | Series A | S. Korea | Exoskeletons, consumer, home healthcare
X Square | Estimate | Seed | U.S. | Software
Yindatong | Estimate | Seed | China | Surgical / interventional systems
Zhicheng Power | Estimate | Series A | China | Consumer / household
Zhongke Huiling | Estimate | Seed | China | Humanoids / bipeds, microcontrollers / microprocessors / SoC

Drones get fuel for takeoff in March 2024

Providers of drones, drone technologies, and drone services also attracted substantial individual investments in March 2024. Examples included Firestorm and Gather AI, a developer of inventory monitoring drones whose Series A was $17 million.

In addition, drone services provider Preneu obtained $11 million in Series B funding, and DRONAMICS, a developer of drone technology for cargo transportation and logistics operations, got a grant worth $10.8 million.

Companies in the U.S. and China received the majority of the March 2024 funding, at $451 million and $100 million, respectively (see Figure 2, below).

Companies based in Japan and the U.K. were also well represented among the March 2024 investment totals. Four companies in Japan secured a total of $34.7 million, while an equal number of firms in the U.K. attracted $13.5 million in funding.


Figure 2: March 2024 robotics investment by country.

Nearly 40% of March’s robotics investments came from a single Series E round — that of Applied Intuition. The remaining funding classes were all represented in March 2024 (Figure 3, below).

Figure 3: March 2024 robotics funding by type and amounts.

Editor’s notes

What defines robotics investments? The answer to this simple question is central in any attempt to quantify them with some degree of rigor. To make investment analyses consistent, repeatable, and valuable, it is critical to wring out as much subjectivity as possible during the evaluation process. This begins with a definition of terms and a description of assumptions.

Investors and investing

Investment should come from venture capital firms, corporate investment groups, angel investors, and other sources. Friends-and-family investments, government/non-governmental agency grants, and crowd-sourced funding are excluded.

Robotics and intelligent systems companies

Robotics companies must generate or expect to generate revenue from the production of robotics products (that sense, analyze, and act in the physical world), hardware or software subsystems and enabling technologies for robots, or services supporting robotics devices. For this analysis, autonomous vehicles (including technologies that support autonomous driving) and drones are considered robots, while 3D printers, CNC systems, and various types of “hard” automation are not.

Companies that are “robotic” in name only, or use the term “robot” to describe products and services that do not enable or support devices acting in the physical world, are excluded. For example, this includes “software robots” and robotic process automation. Many firms have multiple locations in different countries. Company locations given in the analysis are based on the publicly listed headquarters in legal documents, press releases, etc.

Verification

Funding information is collected from several public and private sources. These include press releases from corporations and investment groups, corporate briefings, market research firms, and association and industry publications. In addition, information comes from sessions at conferences and seminars, as well as during private interviews with industry representatives, investors, and others. Unverifiable investments are excluded and estimates are made where investment amounts are not provided or are unclear.


Teledyne FLIR IIS announces new Bumblebee X stereo vision camera
https://www.therobotreport.com/teledyne-flir-iis-announces-new-bumblebee-x-stereo-vision-camera/
Tue, 16 Apr 2024
Bumblebee X is a new GigE-powered stereo imaging solution that delivers high accuracy and low latency for robotic guidance and pick-and-place applications.

Bumblebee X is a new GigE-powered stereo imaging solution that delivers high accuracy and low latency for robotic guidance and pick-and-place applications. | Credit: Teledyne FLIR

Teledyne FLIR IIS (Integrated Imaging Solutions) today announced the new Bumblebee X series – an advanced stereo-depth vision solution optimized for multiple applications. The imaging device is a comprehensive industrial-grade (IP67) stereo vision solution with onboard processing to build successful systems for warehouse automation, robotics guidance, and logistics.

The Bumblebee X 5GigE delivers on the essential need for a comprehensive and real-time stereo vision solution, the Wilsonville, Ore.-based company says. Customers can test and deploy depth sensing systems that work at ranges of up to 20 meters with the wide-baseline solution.
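For context on why a wide baseline supports ranges out to 20 meters, the generic pinhole-stereo relation is Z = f * B / d. The sketch below uses the camera's nominal 24 cm baseline from the feature list further down; the focal length and disparity values are illustrative assumptions, not Bumblebee X calibration data.

```python
# Generic stereo relation, not Teledyne's processing pipeline: depth Z = f*B/d,
# with focal length f in pixels, baseline B in meters, disparity d in pixels.
def depth_from_disparity(disparity_px, focal_px=1200.0, baseline_m=0.24):
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# With these assumed values, about 14.4 px of disparity corresponds to 20 m,
# which is why sub-pixel matching accuracy matters at the far end of the range.
print(round(depth_from_disparity(14.4), 1))  # 20.0
```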

The Teledyne FLIR Bumblebee X camera is packaged in an IP67 enclosure and ready for industrial use cases. | Credit: Teledyne FLIR

Available in three configurations

The new camera is available in three different configurations, which are identical except for the field of view (FOV) of the camera lens. Teledyne designed the camera to operate accurately across varying distances. The low latency and GigE networking make it ideal for real-time applications such as autonomous mobile robots, automated guided vehicles, pick and place, bin picking, and palletization, the company said. 

“We’re thrilled to announce the release of Bumblebee X, a new comprehensive solution for tackling complex depth sensing challenges with ease,” said Sadiq Panjwani, General Manager at Teledyne FLIR IIS. “Our team’s extensive stereo vision expertise and careful attention to customer insights have informed the design of the hardware, software, and processing at the core of Bumblebee X. With high accuracy across a large range of distances, this solution is perfect for factories and warehouses.”

Specifications

This table compares the specs for the three different configurations of the Bumblebee X camera. Check the website for actual specs. | Credit: Teledyne FLIR

Key features include:

  • Factory-calibrated 9.4 in (24 cm) baseline stereo vision with 3 MP sensors for high accuracy and low latency real-time applications
  • IP67 industrial-rated vision system with ordering options of color or monochrome, different fields of view, and 1GigE or 5GigE PoE
  • Onboard processing to output a depth map and color data for point cloud conversion and colorization
  • Ability to trigger an external pattern projector and synchronize multiple systems together for more precise 3D depth information

Teledyne FLIR manages a software library with articles, example code, and Windows, Linux, and Robot Operating System (ROS) support. Order requests will be accepted at the end of Q2 2024.

Micropsi Industries’ MIRAI 2 offers faster deployment and scalability
https://www.therobotreport.com/micropsi-industries-mirai-2-offers-faster-deployment-and-scalability/
Wed, 20 Mar 2024
MIRAI 2 comes with five new features that Micropsi Industries says enhance manufacturers' ability to reliably solve automation tasks.

MIRAI 2 is the latest generation of Micropsi Industries’ AI-vision software. | Source: Micropsi Industries

Micropsi Industries today announced MIRAI 2, the latest generation of its AI-vision software for robotic automation. MIRAI 2 comes with five new features that the company says enhance manufacturers’ ability to reliably solve automation tasks with variance in position, shape, color, lighting, or background. 

The Berlin, Germany-based company says MIRAI 2 offers users even greater reliability, easier and faster deployment, and robot-fleet scalability. MIRAI 2 is available immediately. 

“MIRAI 2 is all about scale: It’s MIRAI for more powerful robots, larger fleets of robots, and tougher physical environments, and it brings more tools to prepare for changes in the environment,” Ronnie Vuine, founder of Micropsi Industries and responsible for product development, said in a release. “We’ve let our most demanding automotive OEM customers drive the requirements for this version without sacrificing the simplicity of the product. It still wraps immensely powerful machine learning in a package that delivers quick and predictable success and is at home in the engineering environment it’s being deployed in.”

5 new functions available with MIRAI 2

MIRAI is an advanced AI-vision software system that enables robots to dynamically respond to varying conditions within the factory environment. Micropsi Industries highlighted five new functions available with MIRAI 2:

  • Robot skill-sharing: This new function allows users to share skills between multiple robots at the same site or elsewhere. If the conditions at the sites are identical, which could include lighting, background, and more, then users need very little or no additional training when adding installations. The company says it can also handle small differences in conditions by recording data from multiple installations into a single, robust skill. 
  • Semi-automatic data recording: Semi-automatic training allows users to record episodes of data for skills without having to hand-guide the robot. Micropsi Industries said this feature reduces the workload on users and increases the quality of recorded data. Additionally, MIRAI can now automatically record all relevant data. Users only need to prepare the training situations and corresponding robot target poses.
  • No F/T sensor: Users can train and run skills without connecting a force/torque sensor. The company says this reduces costs, simplifies tool geometry and cabling setup, and makes skill applications more robust and easier to train overall. 
  • Abnormal condition detection: MIRAI can now be configured to stop skills when unexpected conditions are encountered, allowing users to handle these exceptions in their robot program or alert a human operator.
  • Industrial PC: The MIRAI software can now be run on a selection of industrial-grade hardware for higher dependability in rough factory conditions.

MIRAI 2 detects unexpected workspace situations

MIRAI can pick up on variances in position, shape, color, lighting, and background. It can operate with real factory data without the need for CAD data, controlled light, visual feature predefinition, or extensive knowledge of computer vision. 

MIRAI 2 offers customers improved reliability thanks to its ability to detect unexpected workspace situations. The system has a new, automated way to collect training data and the option to run the software on industrial-grade PCs, resulting in higher dependability in rough factory conditions.

MIRAI 2’s new features assist in recording the required data for training robots, which means that training the system is easier and faster. Additionally, the system comes equipped with MIRAI skills, which are trained guidelines that tell robots how to behave when performing a desired action. These can now be easily and quickly shared with an entire fleet of robots. 

“By integrating new features and capabilities into our offerings, we can address the unique challenges faced by these industries even more effectively,” Gary Jackson, recently appointed CEO of Micropsi Industries, said in a release. “Recognizing the complexities of implementing advanced AI in robotic systems, we’ve assembled expert teams that combine our in-house talent with select system integration partners to ensure that our customers’ projects are supported successfully, no matter how complex the requirements.”

Slamcore Aware provides visual spatial intelligence for intralogistics fleets
https://www.therobotreport.com/slamcore-aware-provides-visual-spatial-intelligence-for-intralogistics-fleets/
Mon, 11 Mar 2024
Slamcore Aware combines the Slamcore SDK with industrial-grade hardware to provide robot-like localization for manually driven vehicles.

Slamcore Aware identifies people and other vehicles for enhanced safety and efficiency, and is designed to be simple and quick to commission. Source: Slamcore

Just as advanced driver-assist systems, or ADAS, mark progress toward autonomous vehicles, so too can spatial intelligence assist manually driven vehicles in factories and warehouses. At MODEX today, Slamcore Ltd. launched Slamcore Aware, which it said can improve the accuracy, robustness, and scalability of 3D localization data for tracking intralogistics vehicles.

“Prospective customers tell us that they are looking for a fast-to-deploy and scalable method that will provide the location data they desperately need to optimize warehouse and factory intralogistics for speed and safety,” stated Owen Nicholson, CEO of Slamcore. “Slamcore Aware marks a significant leap forward in intralogistics management, bringing the power of visual spatial awareness to almost any vehicle in a way that is scalable and can cope with the highly dynamic and complex environments inside today’s factories and warehouses.”

Robots and autonomous machines need to efficiently locate themselves, plus map and understand their surroundings in real time, according to Slamcore. The London-based company said its hardware and software can help developers and manufacturers with simultaneous localization and mapping (SLAM).

Slamcore asserted that its spatial intelligence software is accurate, robust, and computationally efficient. It works “out of the box” with standard sensors and can be tuned for a wide range of custom sensors or compute, accelerating time to market, said the company.

Slamcore Aware brings AMR accuracy to vehicles

Slamcore Aware collects and processes visual data to provide rich, real-time information on the exact position and orientation of manually driven vehicles, said Slamcore. Unlike existing systems, the new product can scale easily across large, complex, and ever-changing industrial sites, the company claimed.

Slamcore Aware combines the Slamcore software development kit (SDK) with industrial-grade hardware, providing a unified approach for fast installation on intralogistics vehicles and integration with new and existing Real Time Location Systems (RTLS).

It incorporates AI to perceive and classify people and other vehicles, said Slamcore. RTLS applications can use this enhanced data to significantly improve efficiency and safety of operations, it noted.

The new product brings SLAM technologies developed for autonomous mobile robots (AMRs) to manual vehicles, providing estimation of location and orientation of important assets with centimeter-scale precision, said the company.

With stereo cameras and advanced algorithms, the Slamcore Aware module can automatically calculate the location of the vehicle it is fitted to and then create a map of a facility as the vehicle moves around. It can note changes to layout and the position of vehicles, goods, and people, even in highly dynamic environments, Slamcore said.


‘Inside-out’ approach offers scalability

Existing navigation systems require the installation of receiver antennas across facilities to provide “line-of-sight” connectivity, said Slamcore. However, they become more expensive as facilities scale, with large or complex sites needing hundreds of antennas to track even a handful of vehicles.

Even with this expensive infrastructure, coverage is often unreliable, reducing the effectiveness of RTLS and warehouse robots, Slamcore said. The company said Slamcore Aware addresses these industry pain points.

The system takes an “inside-out” approach that scales in line with the number of vehicles deployed, regardless of the areas they must cover or the complexity of internal layouts. As new vehicles are added to the fleet, an additional module can be simply fitted to each one so that every vehicle automatically and continuously determines its location wherever it is across the whole site, said Slamcore in a release.

Visual spatial intelligence data is processed at the edge, onboard the vehicle, explained the company. Position and orientation data is shared via a lightweight and flexible application programming interface (API) for use in nearly any route-planning, analytics, and optimization platform without compromising performance, it said.
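Slamcore does not describe the API's shape in this announcement. Purely as a hypothetical sketch of the pattern it implies (an RTLS or analytics service polling an on-vehicle module for pose updates), here is an example with invented endpoint and field names.

```python
# Hypothetical illustration only; the endpoint and JSON fields are invented
# and do not document Slamcore's actual API.
import json
from urllib.request import urlopen

POSE_ENDPOINT = "http://forklift-07.local:8080/pose"   # made-up address

def fetch_pose(endpoint=POSE_ENDPOINT):
    """Return the latest position/orientation estimate published by a vehicle."""
    with urlopen(endpoint, timeout=1.0) as resp:
        data = json.load(resp)
    return {
        "position_m": data["position"],           # assumed {"x": ..., "y": ..., "z": ...}
        "orientation_quat": data["orientation"],  # assumed quaternion {"x", "y", "z", "w"}
        "timestamp": data["timestamp"],
    }
```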

Slamcore is offering Slamcore Aware to facility operators, fleet management and intralogistics specialists, systems integrators, and other RTLS specialists. The company is exhibiting at MODEX in Atlanta for the first time this week at Booth A13918. It will also be at LogiMAT in Stuttgart, Germany.


RIOS Intelligent Machines raises Series B funding, starts rolling out Mission Control
https://www.therobotreport.com/rios-intelligent-machines-raises-series-b-funding-starts-rolls-out-mission-control/
Fri, 08 Mar 2024
RIOS has gotten investment from Yamaha and others to continue developing machine vision-driven robotics for manufacturers.

RIOS works with NVIDIA Isaac Sim and serves the wood-products industry. Source: RIOS Intelligent Machines

RIOS Intelligent Machines Inc. this week announced that it has raised $13 million in Series B funding, co-led by Yamaha Motor Corp. and IAG Capital Partners. The company said it plans to use the investment to develop and offer artificial intelligence and vision-driven robotics, starting with a product for the lumber and plywood-handling sector.

Menlo Park, Calif.-based RIOS said its systems can enhance production efficiency and control. The company focuses on three industrial segments: wood products, beverage distribution, and packaged food products.

RIOS works with NVIDIA Omniverse on factory simulations. It has also launched its Mission Control Center, which uses machine vision and AI to help manufacturers improve quality and efficiency.

RIOS offers visibility to manufacturers

“Customers in manufacturing want a better way to introspect their production — ‘Why did this part of the line go down?'” said Clinton Smith, co-founder and CEO of RIOS. “But incumbent tools have not been getting glowing reviews. Our standoff vision system eliminates a lot of that because our vision and AI are more robust.”

The mission-control product started as an internal tool and is now being rolled out to select customers, Smith told The Robot Report. “We’ve observed that customers want fine-grained control of processes, but there are a lot of inefficiencies, even at larger factories in the U.S.”

Manufacturers that already work with tight tolerances, such as in aerospace or electronics, already have well-defined processes, he noted. But companies with high SKU turnover volumes, such as with seasonal variations, often find it difficult to rely on a third party’s AI, added Smith.

“Mission Control is a centralized platform that provides a visual way to visualize processes and to start to interact with our robotics,” he explained. “We want operators to identify what to work on and what metrics to count for throughput and ROI [return on investment], but if there’s an error on the data side, it can be a pain to go back to the database.”

Smith shared the example of a bottlecap tracker. In typical machine learning, this requires a lot of data to be annotated before training models and then looking at the results.

With RIOS Mission Control, operators can monitor a process and select a counting zone. They can simply draw a box around a feature to be annotated, and the system will automatically detect and draw comparisons, he said.

“You place a system over the conveyor, pick an item, and you’re done,” said Smith. “It’s not just counting objects. For example, our wood products customers want to know where there are knots in boards to cut around. It could also be used in kitting applications.”
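RIOS has not released Mission Control's internals. As a generic illustration of the zone-counting pattern Smith describes (the operator draws a box, and the system counts detections inside it), here is a sketch using OpenCV; the detector is a stand-in for whatever model produces bounding boxes.

```python
# Generic illustration, not RIOS Mission Control code: count objects whose
# detected centroids fall inside an operator-drawn counting zone.
import cv2

COUNTING_ZONE = (200, 100, 400, 300)  # x, y, width, height drawn by the operator

def in_zone(cx, cy, zone=COUNTING_ZONE):
    x, y, w, h = zone
    return x <= cx <= x + w and y <= cy <= y + h

def count_in_frame(frame, detector):
    """detector(frame) must return a list of (x, y, w, h) bounding boxes."""
    count = 0
    for (x, y, w, h) in detector(frame):
        if in_zone(x + w // 2, y + h // 2):
            count += 1
    zx, zy, zw, zh = COUNTING_ZONE
    # Draw the zone so the operator can see what is being counted
    cv2.rectangle(frame, (zx, zy), (zx + zw, zy + zh), (0, 255, 0), 2)
    return count
```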

RIOS is releasing the feature in phases and is working on object manipulation. Smith said the company is also integrating the new feature with its tooling. In addition, RIOS is in discussions with customers, which can use its own or their existing cameras for Mission Control.

Investors express confidence in automation approach

Yamaha has been an investor in RIOS Intelligent Machines since 2020. The vehicle maker said it has more than doubled its investment in RIOS, demonstrating its confidence in the company’s automation technologies and business strategy.

IAG Capital Partners is a private investment group in Charleston, S.C. The firm invests in early-stage companies and partners with innovators to build manufacturing companies. Dennis Sacha, partner at IAG, will be joining the RIOS board of directors.

“RIOS’s full production vision — from automation to quality assurance to process improvement to digital twinning — and deep understanding of production needs positions them well in the world of manufacturing,” said Sacha, who led jet engine and P-3 production for six years during his career in the U.S. Navy.

In addition, RIOS announced nearly full participation from its existing investors, including Series A lead investor, Main Sequence, which doubled its pro-rata investment. RIOS will be participating in MODEX, GTC, and Automate.





Cambrian Robotics obtains seed funding to provide vision for complex tasks
https://www.therobotreport.com/cambrian-robotics-obtains-seed-funding-to-provide-vision-for-complex-tasks/
Fri, 08 Mar 2024

Cambrian will use the funding to continue its mission of giving industrial robots human-level capabilities for complex tasks.



Cambrian is developing machine vision to give industrial robots new capabilities. Source: Cambrian Robotics

Machine vision startup Cambrian Robotics Ltd. this week announced that it has raised $3.5 million in seed+ funding. The company said it plans to use the investment to continue developing its artificial intelligence platform to enable robot arms “to surpass human capabilities in complex vision-based tasks across a variety of industries.”

Cambrian Robotics said its technology empowers manufacturers to automate a broad range of tasks, “particularly those in advanced manufacturing and quality assurance that demand high precision and accuracy within dynamic environments.” The London-based company has offices in Augsburg, Germany, and the U.S.

Cambrian noted that its executive team, led by CEO Miika Satori, has over 50 years of combined experience in AI and robotics. Joao Seabra, chief technology officer, is an award-winning roboticist, and Dr. Alexandre Borghi, head of AI, previously led research teams at a $3 billion AI chip startup.

“We are incredibly excited about the possibilities that our recent fundraising opens up,” said Satori. “Our primary goals are to enhance the scalability of the product and strengthen our sales and operations in our main target markets.”

“In addition, we are bringing new AI-vision-based skills to robot arms, further pushing boundaries in the field of robotics,” he added. “We are equally thrilled to begin collaborating with our new investors, whose support is pivotal in driving these advancements forward.”




Cambrian Robotics vision already in use

Cambrian Robotics claimed that its AI-driven vision software and camera hardware enables existing robots to automate complex tasks that were previously only possible with manual methods. It said its systems enable robots to execute intricate assembly processes, bin picking, kitting, and pick-and-place operations “with unmatched accuracy in any lighting condition — a true breakthrough compared to current industry-leading AI vision capabilities.”

In addition, Cambrian said its system can be installed in about half a day, works with all major industrial and collaborative robots, and can pick microparts precisely in less than 200 ms. Cambrian claimed that its technology is unique in its ability to pick a wide range of parts, including transparent, plastic, and shiny metal ones.

Appliance manufacturers around the world have deployed Cambrian for quality assurance, catching manufacturing defects that were previously invisible to the human eye, the company said. Cambrian is testing and deploying its vision systems with leading manufacturers including Toyota, Audi, Suzuki, Kao, and Electrolux.

“Although in our factories we have a high level of automation, we still have an important quantity of flexible components and manual processes, which add variability,” said Jaume Soriano, an industrial engineer at Electrolux Group. “Cambrian helps us keep moving toward a more automated manufacturing reality while being able to deal with variable scenarios.”

Cybernetix Ventures leads investment

Cybernetix Ventures and KST Invest GmbH led Cambrian Robotics’ seed funding, with participation from Yamaha Motor Ventures and Digital Media Professionals (DMP).

“Machine vision is a crowded space, but Cambrian has strong differentiation with its unique ability to identify small and transparent items with proprietary visual AI software,” said Fady Saad, founder and general partner of Cybernetix, who will join Cambrian’s board of directors. “Miika and his exceptional team have also managed to bring the product to market with active revenue from top brands.”

Boston-based Cybernetix Ventures is a venture capital firm investing in early-stage robotics, automation, and industrial AI startups. It offers its expertise to companies poised to make major impacts in sectors including advanced manufacturing; logistics/warehousing; architecture, engineering, and construction; and healthcare/medical devices.

KST Invest is a private fund established by one of the owner families of a leading German industrial automation firm. The fund’s objective is to invest in robotics and advanced manufacturing, among other themes. “Innovation is the livelihood of any business in industrial automation, specifically the combination of vision and robotics,” it said.

Cambrian is also backed by ff Venture Capital (ffVC), which invested in the company’s seed round. ffVC initially seeded Cambrian after the startup graduated from its accelerator, AI Nexus Lab, in partnership with New York University’s Tandon School of Engineering in Brooklyn.

Cambrian is already working with major manufacturers. Source: Cambrian Robotics



Pleora adds RapidPIX lossless compression technology
https://www.therobotreport.com/pleora-adds-rapidpix-lossless-compression-technology/
Wed, 21 Feb 2024

Pleora Technologies said RapidPIX meets the low latency and reliability demands of machine vision and medical imaging applications.



Pleora Technologies’ iPORT NTx-Mini-LC with RapidPIX compression. | Source: Pleora Technologies

Pleora Technologies has introduced its patented RapidPIX lossless compression technology. The company said RapidPIX can increase data throughput by almost 70% while meeting the low latency and reliability demands of machine vision applications.

The Kanata, Ontario, Canada-based company said RapidPIX is initially available on Pleora’s new iPORT NTx-Mini-LC platform, which provides a compression-enabled drop-in upgrade of the NTx-Mini embedded interface.

“System designers have been asking us for ways to increase resolution and frame rates over existing Ethernet infrastructure for machine vision applications, without compromising on latency or image data integrity that Pleora is known for,” Jonathan Hou, president of Pleora Technologies, said. “With RapidPIX we’re meeting this demand.

“Pleora’s patented compression technique delivers bandwidth advantages that increase performance without impacting the data quality required for accurate processing in critical applications. As an immediate advantage, designers can cost-effectively increase data throughput while retaining existing installed infrastructure. While boosting performance our compression technology helps further conserve valuable resources, including power consumption, to reduce system costs.”

Pleora: added compression has many benefits

Pleora said that with added compression, engineers can deploy the iPORT NTx-Mini-LC to support low latency transmission of GigE Vision-compliant packets at more than 1.5 Gbps throughput rates over existing 1 Gb Ethernet infrastructure. 

With RapidPIX, systems feed imaging data into the RapidPIX encoder. The encoder then analyzes the imaging data against compression profiles and selects the best approach based on the application requirements. Pleora said latency performance is less than two lines, or approximately 0.022 milliseconds, when deployed on a system operating at 1024×1024 resolution, Mono8 pixel format with two taps at 40 MHz. The company said users can further reduce latency depending on the number of taps and pixel format.
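For a rough sense of scale, a two-line latency at those settings can be sanity-checked from the tap count and pixel clock alone. The simple model below is an assumption for illustration, not Pleora’s published calculation; it ignores blanking and encoder overhead.

```python
# Back-of-the-envelope estimate of "two lines" of readout latency.
# Assumed, simplified model; not Pleora's published calculation.
def two_line_latency_ms(pixels_per_line, taps, pixel_clock_hz):
    pixel_rate = taps * pixel_clock_hz           # pixels read out per second
    line_time_s = pixels_per_line / pixel_rate   # time to read one line
    return 2 * line_time_s * 1e3                 # two lines, in milliseconds

# 1024x1024 resolution, Mono8, two taps at 40 MHz:
print(two_line_latency_ms(1024, taps=2, pixel_clock_hz=40e6))  # ~0.0256 ms
```

This lands around 0.026 ms, the same order of magnitude as the quoted 0.022 ms; the exact figure presumably reflects implementation details this simple model leaves out.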

Pleora said the lossless compression system also minimizes the amount of data transmitted over the network, which reduces power consumption. Additionally, the mathematically lossless compression technology supports multi-tap and multi-component configurations.

To speed time to market, Pleora offers the iPORT NTx-Mini-LC with RapidPIX Development Kit. The company said this kit helps manufacturers develop system or camera prototypes and proofs of concept easily and rapidly, often without undertaking hardware development.


The role of ToF sensors in mobile robots
https://www.therobotreport.com/the-role-of-tof-sensors-in-mobile-robots/
Tue, 23 Jan 2024

Time-of-flight or ToF sensors provide mobile robots with precise navigation, low-light performance, and high frame rates for a range of applications.



ToF sensors provide 3D information about the world around a mobile robot, supplying important data to the robot’s perception algorithms. | Credit: e-con Systems

In the ever-evolving world of robotics, the seamless integration of technologies promises to revolutionize how humans interact with machines. An example of transformative innovation, the emergence of time-of-flight or ToF sensors is crucial in enabling mobile robots to better perceive the world around them.

ToF sensors have a similar application to lidar technology in that both use multiple sensors for creating depth maps. However, the key distinction lies in these cameras‘ ability to provide depth images that can be processed faster, and they can be built into systems for various applications.

This maximizes the utility of ToF technology in robotics. It has the potential to benefit industries reliant on precise navigation and interaction.

Why mobile robots need 3D vision

Historically, RGB cameras were the primary sensor for industrial robots, capturing 2D images based on color information in a scene. These 2D cameras have been used for decades in industrial settings to guide robot arms in pick-and-pack applications.

Such 2D RGB cameras always require a camera-to-arm calibration sequence to map scene data to the robot’s world coordinate system. 2D cameras are unable to gauge distances without this calibration sequence, thus making them unusable as sensors for obstacle avoidance and guidance.
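For readers unfamiliar with that calibration sequence, the sketch below shows the shape of a typical hand-eye calibration step using OpenCV’s calibrateHandEye, in the common variant where the camera is mounted on the arm. It is a generic illustration, not a specific vendor’s procedure; the pose lists are placeholders that would be gathered by moving the arm to several poses while imaging a calibration target.

```python
# Minimal sketch of the camera-to-arm ("hand-eye") calibration step that 2D
# guidance systems rely on. The pose lists are placeholders: robot flange
# poses from the controller, target poses from chessboard detections.
import cv2
import numpy as np

def calibrate_camera_to_gripper(R_gripper2base, t_gripper2base,
                                R_target2cam, t_target2cam):
    """Each argument is a list of 3x3 rotations / 3x1 translations, one per
    calibration pose. Returns the fixed camera-to-gripper transform."""
    R_cam2gripper, t_cam2gripper = cv2.calibrateHandEye(
        R_gripper2base, t_gripper2base,
        R_target2cam, t_target2cam,
        method=cv2.CALIB_HAND_EYE_TSAI,
    )
    T = np.eye(4)
    T[:3, :3] = R_cam2gripper
    T[:3, 3] = t_cam2gripper.ravel()
    return T  # use this to map camera detections into robot coordinates
```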

Autonomous mobile robots (AMRs) must accurately perceive the changing world around them to avoid obstacles and build a world map while remaining localized within that map. Time-of-flight sensors have been in existence since the late 1970s and have evolved to become one of the leading technologies for extracting depth data. It was natural to adopt ToF sensors to guide AMRs around their environments.

Lidar was adopted as one of the early types of ToF sensors to enable AMRs to sense the world around them. Lidar bounces a laser light pulse off of surfaces and measures the distance from the sensor to the surface.

However, the first lidar sensors could only perceive a slice of the world around the robot using the flight path of a single laser line. These lidar units were typically positioned between 4 and 12 in. above the ground, and they could only see things that broke through that plane of light.

The next generation of AMRs began to employ 3D stereo RGB cameras that provide 3D depth information. These sensors use two stereo-mounted RGB cameras and a “light dot projector” that enables the camera array to accurately view the projected light on the scene in front of the camera.

Companies such as Photoneo and Intel RealSense were two of the early 3D RGB camera developers in this market. These cameras initially enabled industrial applications such as identifying and picking individual items from bins.

Until the advent of these sensors, bin picking was known as a “holy grail” application, one which the vision guidance community knew would be difficult to solve.

The camera landscape evolves

A salient feature of these ToF cameras is their low-light performance, which prioritizes human-eye safety. The 6 m (19.6 ft.) range in far mode facilitates optimal people and object detection, while the close-range mode excels in volume measurement and quality inspection.

The cameras return the data in the form of a “point cloud.” On-camera processing capability mitigates computational overhead and is potentially useful for applications like warehouse robots, service robots, robotic arms, autonomous guided vehicles (AGVs), people-counting systems, 3D face recognition for anti-spoofing, and patient care and monitoring.

Time-of-flight technology is significantly more affordable than other 3D-depth range-scanning technologies like structured-light camera/projector systems.

For instance, ToF sensors facilitate the autonomous movement of outdoor delivery robots by precisely measuring depth in real time. This versatile application of ToF cameras in robotics promises to serve industries reliant on precise navigation and interaction.

How ToF sensors take perception a step further

A fundamental difference between time-of-flight and RGB cameras is their ability to perceive depth. RGB cameras capture images based on color information, whereas ToF cameras measure the time taken for light to bounce off an object and return, thus rendering intricate depth perception.
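The underlying relationship is simple: the sensor measures a round trip, so depth is half the travel time multiplied by the speed of light, and continuous-wave sensors recover that time from the phase shift of modulated light. The sketch below shows both standard forms; the example round-trip time and modulation frequency are arbitrary illustrations.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def tof_depth_pulsed(round_trip_s):
    # Direct (pulsed) ToF: the light travels out and back, so d = c * t / 2.
    return C * round_trip_s / 2.0

def tof_depth_cw(phase_shift_rad, mod_freq_hz):
    # Continuous-wave ToF: depth from the phase shift of amplitude-modulated
    # light, d = c * phi / (4 * pi * f_mod), valid within one ambiguity range.
    return C * phase_shift_rad / (4.0 * math.pi * mod_freq_hz)

print(tof_depth_pulsed(10e-9))          # a 10 ns round trip ~= 1.5 m
print(tof_depth_cw(math.pi / 2, 20e6))  # quarter-cycle shift at 20 MHz ~= 1.87 m
```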

ToF sensors capture data to generate intricate 3D maps of surroundings with unparalleled precision, thus endowing mobile robots with an added dimension of depth perception.

Furthermore, stereo vision technology has also evolved. Using an IR pattern projector, it illuminates the scene and compares disparities of stereo images from two 2D sensors – ensuring superior low-light performance.
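For comparison, the stereo pipeline described above recovers depth from disparity as Z = f·B/d once the pair is calibrated and rectified. Here is a minimal sketch using OpenCV’s semi-global block matcher; the matcher parameters, focal length, and baseline are illustrative assumptions, not values from any particular camera.

```python
# Minimal depth-from-disparity sketch for a calibrated, rectified stereo pair.
# Z = focal_px * baseline_m / disparity. Parameter values are illustrative.
import cv2
import numpy as np

def stereo_depth_map(left_gray, right_gray, focal_px, baseline_m):
    matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128,
                                    blockSize=5)
    disp = matcher.compute(left_gray, right_gray).astype(np.float32) / 16.0
    disp[disp <= 0] = np.nan                 # invalid / unmatched pixels
    return focal_px * baseline_m / disp      # per-pixel depth in meters
```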

In comparison, ToF cameras use a sensor, a lighting unit, and a depth-processing unit. This allows AMRs to have full depth-perception capabilities out of the box without further calibration.

One key advantage of ToF cameras is that they extract 3D images at high frame rates, allowing rapid separation of the background and foreground. They can also function in both bright and dark lighting conditions through the use of active lighting components.
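That rapid background/foreground division is largely a byproduct of having per-pixel range: segmentation becomes a distance threshold rather than an appearance model. A minimal sketch follows; the 1.5 m cutoff is an arbitrary example value.

```python
# Simple illustration of how a depth image makes foreground/background
# separation trivial: threshold on distance rather than on appearance.
import numpy as np

def split_foreground(depth_m, max_range_m=1.5):
    """depth_m: HxW array of per-pixel distances from a ToF camera."""
    foreground = depth_m < max_range_m       # anything closer than the cutoff
    background = ~foreground
    return foreground, background
```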

In summary, compared with RGB cameras, ToF cameras can operate in low-light applications and without the need for calibration. ToF camera units can also be more affordable than stereo RGB cameras or most lidar units.

One downside for ToF cameras is that they must be used in isolation, as their emitters can confuse nearby cameras. ToF cameras also cannot be used in overly bright environments because the ambient light can wash out the emitted light source.


A ToF sensor measures depth and distance using the time of flight of emitted light. | Credit: e-con Systems

Applications of ToF sensors

ToF cameras are enabling multiple AMR/AGV applications in warehouses. These cameras provide warehouse operations with depth perception intelligence that enables robots to see the world around them. This data enables the robots to make critical business decisions with accuracy, convenience, and speed. These include functionalities such as:

  • Localization: This helps AMRs identify positions by scanning the surroundings to create a map and match the information collected to known data
  • Mapping: It creates a map by using the transit time of the light reflected from the target object with the SLAM (simultaneous localization and mapping) algorithm
  • Navigation: Can move from Point A to Point B on a known map

With ToF technology, AMRs can understand their environment in 3D before deciding the path to be taken to avoid obstacles. 
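A common first step in feeding ToF data to mapping and obstacle-avoidance algorithms is back-projecting the depth image into a 3D point cloud using the camera’s pinhole intrinsics, which SLAM and path planners can then consume. The sketch below is generic; the intrinsics (fx, fy, cx, cy) are placeholders that a real camera reports from its calibration.

```python
# Back-project a ToF depth image into a 3D point cloud with pinhole intrinsics.
import numpy as np

def depth_to_point_cloud(depth_m, fx, fy, cx, cy):
    """depth_m: HxW depth image in meters (0 where the sensor has no return)."""
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_m
    x = (u - cx) * z / fx                    # pinhole back-projection
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]          # drop pixels with no valid depth
```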

Finally, there’s odometry, the process of estimating any change in the position of the mobile robot over some time by analyzing data from motion sensors. ToF technology has shown that it can be fused with other sensors to improve the accuracy of AMRs.

About the author

Maharajan Veerabahu has more than two decades of experience in embedded software and product development, and he is a co-founder and vice president of product development services at e-con Systems, a prominent OEM camera product and design services company. Veerabahu is also a co-founder of VisAi Labs, a computer vision and AI R&D unit that provides vision AI-based solutions for their camera customers.


KEF Robotics takes a modular approach to aircraft navigation and autonomy
https://www.therobotreport.com/kef-robotics-takes-modular-approach-aircraft-navigation-autonomy/
Thu, 18 Jan 2024

KEF Robotics says its vision software works with different hardware and software to enable drones to navigate in GPS-denied environments.



Tailwind provides visual navigation to drones in GPS-denied environments. Source: KEF Robotics

While autopilots have helped fly aircraft for nearly a century, recent improvements in computer vision and autonomy promise to bring more software-based capabilities onboard. KEF Robotics Inc. has been developing technologies to increase aircraft safety, reliability, and range.

Founded in 2018, KEF said it provides algorithms that use camera data to enable autonomous flight across a variety of platforms and use cases. The Pittsburgh-based company works with designers to integrate these autonomy features into their aircraft.

“Our company’s mantra is to provide visual autonomy capabilities with any camera, any drone, or any computer,” said Eric Amoroso, co-founder and chief technology officer of KEF Robotics. “Being flexible and deployable to drones changes the integration from days to hours, as well as providing safe, reliable navigation,” he told The Robot Report.

“Think of us as an alternative to GPS,” said Olga Pogoda, chief operating officer at KEF Robotics. “The situation in Ukraine shows the difficulty of operating without GPS and true autonomy on the aircraft.”

KEF Robotics enables aircraft to operate without signals

“We founded KEF while entering a Lockheed Martin competition, which whittled 200 teams down to nine,” recalled Amoroso. “The drones had to be autonomous, which was a perfect test case for modular, third-party software.”

Since then, KEF Robotics has worked with the Defense Threat Reduction Agency (DTRA), which uses drones with multiple sensors to search for weapons of mass destruction.

The company said its Tailwind visual navigation software can use stereo cameras for hazard detection and avoidance and that it uses machine learning to localize objects and complete missions. This is particularly important for defense and security missions.

“Our long-term goal is to allow an aircraft to complete complex missions with a button push,” said Pogoda. “An operator can provide an overhead image of a building and a general direction.”

“Then, the autonomous aircraft can take off, fly to a location, and conduct a search pattern,” she explained. “It can reroute based on hazards on the way, and then it can take pictures or readings and come back to an operator without transmitting any signals.”

KEF Robotics said Tailwind, which supports nighttime and long-range, GPS-denied flights, is in testing and on its way to availability. The software has been validated at speeds up to 100 mph and provides closed-loop autonomous operations with drift rates of 2% of the distance traveled. It has not yet been qualified for extreme weather or conditions such as dust, fog, or smoke.
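To put the 2% drift specification in perspective, a linear drift model gives a quick feel for the expected position error over a mission leg. The linear assumption is a simplification of how visual-navigation error actually accumulates, and the distances below are arbitrary examples.

```python
# Rough illustration of a 2%-of-distance-traveled drift budget.
# Assumes error grows linearly with distance, which is a simplification.
def expected_drift_m(distance_traveled_m, drift_rate=0.02):
    return distance_traveled_m * drift_rate

print(expected_drift_m(1_000))   # ~20 m of accumulated error after 1 km without GPS
print(expected_drift_m(5_000))   # ~100 m after 5 km without GPS
```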

Integration important for modular approach

As with other autonomous systems such as cars, a technology stack with layers of capabilities from different, specialized providers is evolving for aircraft and drones.

“We’re seeing an interesting economic trend in purchasing aircraft — manufacturers are focusing on producing aircraft, and autonomy software is complex,” Pogoda noted. “More companies just want to build the aircraft with open interfaces to allow their customers to add capabilities after the initial delivery.”

To facilitate a more rapid integration of advanced autonomy, the U.S. Department of Defense’s Modular Open Systems Approach (MOSA) is an initiative intended to save money, enable faster and easier equipment upgrades, and improve system interoperability. KEF Robotics is following this approach.

“MOSA says that everything should be open architecture, and the industry must create tools for everything to work together,” said Pogoda.

KEF Robotics has won Small Business Innovation Research (SBIR) grants to advance its technology. How does modular software figure in?

“The Defense Innovation Unit started pushing the MOSA philosophy that companies like KEF Robotics are embracing to rapidly integrate and innovate UAS [uncrewed aircraft system] technology,” Amoroso said. “We specialize in providing plug-and-play visual perception — technology that is expensive and challenging to develop if you’re also designing novel UAS. With MOSA, drone builders can let KEF Robotics focus on the reliability and performance challenges of visual perception while selling a product with state-of-the-art autonomy.”

“The conflict in Ukraine showed the crippling impacts of widespread GPS jamming and the utility of low-cost UAS,” he added. “It’s only through MOSA do we believe that we can circumvent these threats affordably and at scale.”

“KEF offers two forms of our solution,” said Amoroso. “One is for those interested in GPS-denied navigation, collision avoidance, and target localization. It’s a hardware-based payload that includes systems to communicate with an autopilot.”

“There’s also a software-only deployment for drones that may already have such hardware onboard,” he added. “We follow the MOSA philosophy for deploying our software, along with others’ software and camera drivers.”

For example, KEF Robotics’ software can take measurements and localize them, and a third-party architecture can do custom object detection to spot smokestacks, Amoroso said.

KEF Robotics collaborates with Auterion, Duality AI

“Before KEF came along, there was already a great community working on GPS-denied navigation, including Auterion and Cloud Ground Control,” said Amoroso. “But we had teams and companies coming to us saying, ‘How can we get vision navigation to be plug and play?'”

In June 2023, Auterion Government Solutions partnered with KEF Robotics to combine AuterionOS with Tailwind for robots and autonomous systems.

“We have a great relationship with Auterion, which sees the same core needs for ease of integration and reliability,” Amoroso said. “We offer an instantiation of vision-based navigation, but we want to set it up for new players to slot in and offer their solutions more easily, such as a lidar-based state estimator.”

“We started chatting early last year about how we’d work with Auterion Enterprise PX4,” he noted. “Auterion wanted to see a GPS-denied demonstration with its own UAVs, and within 18 hours, we got our system running with autopilot in a closed loop. We’re still doing demos with them and are interested in getting our software working with Auterion’s Skynode.”

In November, KEF Robotics said it was working with Duality AI’s Falcon digital-twin integration program to develop autonomy software for a tethered uncrewed aircraft system (TeUAS) under a U.S. Army SBIR contract. Falcon can simulate different environments and drone configurations.

“It can simulate challenging scenarios like cluttered forests to test our software and drones,” said Amoroso. “This is similar to how simulation can help autonomous vehicles augment safety, with the benefit of being able to deploy different camera configurations and software.”

Why decoupling software and hardware makes sense 

How does KEF divide tasks between its systems and those of its partners? “The industry has already aggregated around some standards, but there are always customizations involved to meet a customer’s needs,” replied Amoroso.

“Some customers will say it’s OK to plug in our navigational messaging, and others prefer a companion computer that can monitor measurements or guidance commands to verify or support their own planning,” he said. “It’s important to be flexible and to understand early on what are the interfaces and to do drone demos to show that we can still execute a mission even if we don’t have full control of position or velocity.”

“But the advantage is, by decoupling autonomy from specific hardware, we can generalize our approach and rapidly integrate on a new platform,” Amoroso said. “If an aircraft has an open design, we can integrate our complex software in less than a day, start flying, and then progress to a tighter integration at a customer’s request.”

KEF Robotics is currently focused on defense applications, with a multi-aircraft demonstration of Tailwind for the Army planned for September 2024.

KEF has designed its autonomy software to be hardware-agnostic. Source: KEF Robotics


GRIT Vision System applies AI to Kane Robotics’ cobot weld grinding
https://www.therobotreport.com/grit-vision-system-applies-ai-kane-robotics-cobot-weld-grinding/
Thu, 11 Jan 2024

Kane Robotics has developed the GRIT Vision System to improve cobot material removal and finishing for customers such as Paul Mueller Co.



A worker manipulates a Kane cobot to instruct it. Source: Kane Robotics

Kane Robotics Inc. has combined artificial intelligence with visual sensors to enable its collaborative robot to automatically track and grind weld seams with high accuracy and speed. The company said its GRIT Vision System applies computer vision to the GRIT cobot for material-removal tasks such as sanding, grinding, and polishing in manufacturing.

Launched in 2023, the GRIT robot works alongside humans to perform labor-intensive finishing for any size and type of manufacturer, stated Kane Robotics in a release. Though initially designed for material removal in the aerospace industry, the robot can be configured for metalworking, woodworking, and other types of manufacturing, explained the Austin, Texas-based company.

Kane Robotics cited the case of Paul Mueller Co., which sought a more efficient way to grind welds on large steel tanks. The Springfield, Mo.-based stainless steel equipment manufacturer also wanted to reduce fatigue-related injuries and improve working conditions.

The GRIT Vision System demonstrated its skill in live object detection and adaptive recalibration, allowing Paul Mueller to grind different-sized tank shells and various types of weld seams, said Kane Robotics.

Engineer explains the GRIT Vision System

Dr. Arlo Caine, a consulting engineer at Kane Robotics and a member of the company’s advisory board, helped design the GRIT Vision System. He is an expert in robot programming, mechanical product design, collaborative robotics, machine learning, and computer vision.

Caine is also a professor in the Department of Mathematics and Statistics and associate chair and faculty fellow of the Center for Excellence in Math and Science Teaching at California State Polytechnic University – Pomona. He replied to the following questions from The Robot Report about the GRIT Vision System.

We’ve seen a lot of interest in the past few years around applying machine vision and cobots to welding and finishing. How is Kane Robotics’ approach different from others?

Caine: The GRIT Vision System includes a camera integrated with Kane’s cobot arm and proprietary AI software. The AI uses the camera’s images to “think” as it directs the cobot to follow the weld seam. When weld seams are imperfect, the AI’s automatic steering tracks the uneven pattern and redirects the robotic arm accordingly.

Kane engineers teach the AI to recognize a variety of welds prior to installation. Through software updates, the vision system learns to detect variations in the welds and improve grinding accuracy.

This varies from other cobot welding systems because the Kane AI vision system “sees” the path for the grinding tool to follow, even as the weld seam disappears. Most vision systems can see and react to objects, but those objects don’t disappear, as does a weld seam.

Kane’s vision system overcomes this problem by learning the various stages of each particular weld-grinding process so the cobot recognizes when to continue or stop grinding. The cobot world hasn’t seen this before.

Kane’s proprietary AI software reports on robotic welding operation. Source: Kane Robotics

How does the GRIT robot know when a job is completed?

Caine: The cobot does the dull and dirty work of holding the grinder, and the vision system handles the monotony of tracking large seams for long periods. But the human operator still selects how hard to push for the given abrasive, how fast to move along the seam, and how many passes to make to achieve the required finish. A human operator makes the final judgment about when the grinding job is “complete.”

What constitutes “done” means different things to different customers and different applications. For Kane customer Paul Mueller Co., the applications were so numerous and varied that teaching GRIT fully was out of scope for the bid.

Kane taught the system to do the basic work required and left the management to the human operator. This relates back to Kane’s philosophy of simplicity and automating only what is most crucial to help humans do their jobs better.

Kane cobot keeps humans in the loop

While many people think that automation is about robots replacing human workers entirely, why is keeping a human in the loop “indispensable” for these manufacturing tasks?

Caine: Vision systems use machine learning to correct cobots’ movements in near real time. But manufactured parts are rarely exact replicas of their perfect CAD models and precise cobot movements, so human judgment is still needed. Human operators ultimately determine how to best grind the welds.

Human operators decide how to position the tool, what abrasive to use, and when to meet finish specifications. After the GRIT Vision System takes control of the cobot, a live custom interface allows the human operator to assess and adjust as needed.

It’s important to highlight that our system is truly collaborative. The cobot does the tedious and monotonous work, and the skilled operator shapes the finish.

Do you work only with FANUC’s CRX-10iA/L cobot arm? Was any additional integration necessary?

Caine: The basic operations of the vision system are:

  1. The vision system detects the weld
  2. A guidance algorithm computes robot movements to follow the weld, and
  3. A real-time control program commands the robot controller to make the robotic arm react quickly.

Operations 1 and 2 are robot-independent. Kane has so far implemented Operation 3 only for FANUC CRX cobots (all CRX types, not just the CRX-10iA/L). We are in the process of developing Operation 3 for Universal Robots’ arms.
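Those three operations map naturally onto a perception, guidance, and control loop. The sketch below is a schematic illustration only; camera, robot, and detect_seam are hypothetical interfaces rather than Kane’s or FANUC’s actual APIs, and the proportional gain and feed rate are placeholder values.

```python
# Schematic seam-following loop: (1) detect the weld, (2) compute a correction,
# (3) command the robot. All interfaces here are hypothetical placeholders.
import time

def follow_seam(camera, robot, detect_seam,
                feed_rate_mm_s=20.0, gain=0.5, loop_hz=30.0):
    """camera.read() -> image; detect_seam(image) -> lateral offset of the seam
    from the tool path in mm, or None once the seam disappears;
    robot.send_velocity((vx, vy, vz)) streams a tool-frame velocity command."""
    period = 1.0 / loop_hz
    while True:
        frame = camera.read()
        offset_mm = detect_seam(frame)              # (1) vision
        if offset_mm is None:                       # seam gone: stop and hand
            robot.send_velocity((0.0, 0.0, 0.0))    # control back to the operator
            return
        correction = -gain * offset_mm              # (2) guidance: proportional law
        robot.send_velocity((feed_rate_mm_s,        # (3) control: feed along the
                             correction, 0.0))      #     seam, steer across it
        time.sleep(period)
```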

Each project will have different integrations according to the needs of the job, whether different sizes or styles of grinders or different grinding media.

The Paul Mueller project integrated a 2HP pneumatic belt grinder. The tool can be configured with various contact arms and wheels to use a variety of different belts — 1 to 2 in. wide, 30 in. long — with different abrasive qualities, from coarse to fine to blending.

Kane’s AI program is robust enough to accommodate the use of a variety of tools with the same vision system.

AI learns the welding process

Can you describe in a bit more detail how AI helps with object detection and understanding disappearing weld seams?

Caine: Kane’s proprietary AI software discerns the weld seam from the surrounding material based on dozens of frames per second captured by the GRIT Vision System’s camera. The AI uses a real-time object-detection model to visually identify the weld seam.

Camera and grinding end-of-arm tooling with the GRIT cobot. Source: Kane Robotics

The vision system learns the various stages of a particular weld-grinding process so the cobot will recognize when to stop or continue grinding. This capability is still in development, as Kane collects more data from customers on what all the different phases of weld grinding “down to finish” look like.

In each phase of the assigned work, the AI makes many small decisions per second to accomplish its assigned task, but it doesn’t have the executive authority to sequence those tasks or be responsible for quality control … yet.

Did Kane Robotics work with Paul Mueller Co. in the development of the GRIT system?

Caine: Yes, we partnered with Paul Mueller on this development and used the GRIT Vision System and its proprietary AI for the first time in a commercial application with their team.

The Paul Mueller team was able to assess the new product in real time and offer suggestions for adjustments and improvements. They continue to relay data to us as they test and use the system, which further teaches GRIT to understand what a finished product should look like.

What feedback did you get, and how did you address it?

Caine: Paul Mueller has been pleased with the results of the GRIT Vision System’s weld grinding. They are still in the testing and training phase, but they have expressed satisfaction with the design, the intuitive nature of the AI interface and the overall performance of the system.

Paul Mueller suggested that Kane include an option in the AI interface for the operator to specify the liftoff distance before and after grinding, increasing the ability of the system to conduct grinding around obstacles extending from the surface of the tank — such as fittings, lifting lugs, manways, etc.

Paul Mueller also wanted a more powerful tool turning a larger range of belts to apply to larger and smaller seams. Due to the payload limitations of the CRX-10iA/L, we went with the 2HP belt grinder to get the most power we could find for the weight.

During the design process, we met with the lead grinding technician at Paul Mueller to make sure our tool design was robust enough to implement their processes. As a result, we built a system that their operators find useful and intuitive to operate.

Kane offers GRIT Vision System for applications beyond welding

For future applications of AI vision, do you have plans to work with a specific partner or task next?

Caine: Weld grinding is an exciting new space for cobot vision systems. Kane is ready to work with welders and manufacturing teams to employ the GRIT Vision System for grinding all types of welds in multiple industries.

But the GRIT Vision System is not only applicable to weld grinding; we also plan to offer it for other types of operations in various industries, including sanding composite parts in aerospace assembly, polishing metal pieces in automotive manufacturing, sanding wood in furniture-making and related industries, and other material-removal applications.

