
Robotic Bin Picking Made Simple(r)

The result of “thinking inside the bin” is a design featuring a six-axis Yaskawa robot, an IFM Effector 200 photoelectric distance-measuring sensor, and, mounted to an IAI servo-driven slide, a Cognex In-Sight 8000 camera.


Robotically bin-picking randomly oriented components has long been a challenge, one ordinarily solved by using a 3D vision system.

When Systematix (systematix-inc.com), a systems integrator, was presented with the task of developing an automated system to pick car seat lumbar actuator assemblies out of a bin and place them into a wire nest for assembly, its first idea was to use a robot and a 3D sensor.

But then its engineers reconsidered. They realized that each actuator in the bin didn’t have to be mapped in all three dimensions; two would suffice. By mounting a 2D camera on a vertical slide, each component could simply be located in X and Y.

Because sheets of cardboard separate the layers of randomly oriented parts, and each divider is removed once the parts on top of it have been picked, the Z axis (i.e., depth) needs to be measured just once per layer.
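The control flow this implies can be sketched in a few lines. The function and parameter names below are hypothetical stand-ins, not the vendors’ actual APIs: a distance-sensor read establishes Z once for the whole layer, then the 2D camera supplies an X-Y position (and orientation) for each pick.

```python
# Illustrative sketch of the per-layer picking loop described above.
# All callables here are hypothetical stand-ins for the real sensor,
# camera, and robot interfaces, which are not detailed in the article.

def pick_layer(read_distance_mm, locate_parts, pick_part, bin_top_mm):
    """Pick every part on the current layer of the bin.

    read_distance_mm: returns the measured distance (mm) from the
                      sensor down to the current layer surface.
    locate_parts:     returns a list of (x_mm, y_mm, angle_deg)
                      part locations found by the 2D camera.
    pick_part:        executes one pick at (x, y, z, angle).
    bin_top_mm:       known height (mm) of the sensor reference
                      above the robot's Z origin.
    """
    # Z is measured ONCE per layer: the cardboard divider keeps the
    # layer flat, so every part on it sits at the same depth.
    layer_z = bin_top_mm - read_distance_mm()
    for x, y, angle in locate_parts():
        pick_part(x, y, layer_z, angle)
```

With stub callables, two parts located at the same measured depth both get picked at the same Z, which is the whole point of the once-per-layer measurement.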

The result of this “thinking inside the bin” is a design featuring a six-axis Yaskawa robot (motoman.com), an IFM Effector 200 photoelectric distance-measuring sensor (ifm.com), and, mounted to an IAI servo-driven slide (intelligentactuator.com), a Cognex (cognex.com) In-Sight 8000 camera. 

The camera uses RedLine, the latest iteration of PatMax, the geometric pattern-matching technology that Cognex first patented in 1996. Up until then, pattern matching technology relied upon a pixel-grid analysis process called normalized correlation. That method looks for statistical similarity between a gray-level model or reference image of an object and portions of the image to determine the object’s X-Y position. PatMax instead learns an object’s geometry from a reference image using a set of boundary curves tied to a pixel grid and then looks for similar shapes in the image without relying on specific gray levels. This approach, now widely used by numerous machine vision companies, greatly improves how accurately an object can be recognized despite differences in angle, size and shading.
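The older normalized-correlation approach the paragraph contrasts against is easy to demonstrate. The pure-Python sketch below is an illustration of that general technique, not Cognex’s implementation: it slides a gray-level template over an image, scores each patch by normalized correlation (mean-subtracted, scale-invariant), and reports the X-Y position with the highest score.

```python
import math

def ncc(patch, template):
    """Normalized correlation between two equal-size gray-level grids.

    Returns a score in [-1, 1]; 1.0 means the patch matches the
    template exactly up to brightness offset and contrast scale.
    """
    n = len(patch) * len(patch[0])
    p_mean = sum(sum(row) for row in patch) / n
    t_mean = sum(sum(row) for row in template) / n
    num = p_var = t_var = 0.0
    for p_row, t_row in zip(patch, template):
        for p, t in zip(p_row, t_row):
            num += (p - p_mean) * (t - t_mean)
            p_var += (p - p_mean) ** 2
            t_var += (t - t_mean) ** 2
    denom = math.sqrt(p_var * t_var)
    return num / denom if denom else 0.0  # flat patches score zero

def find_best_match(image, template):
    """Slide the template over the image; return ((x, y), score) of the best fit."""
    th, tw = len(template), len(template[0])
    best_score, best_xy = -2.0, (0, 0)
    for y in range(len(image) - th + 1):
        for x in range(len(image[0]) - tw + 1):
            patch = [row[x:x + tw] for row in image[y:y + th]]
            score = ncc(patch, template)
            if score > best_score:
                best_score, best_xy = score, (x, y)
    return best_xy, best_score
```

Because the score depends on pixel statistics rather than shape, this method degrades when the part is rotated, scaled, or partially shadowed, which is exactly the weakness the geometric boundary-curve approach of PatMax was designed to overcome.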

The system not only gets the job done in the required cycle time; presumably, using this long-proven 2D technology was also more cost-effective than a full 3D vision approach.
 


Gardner Business Media - Strategic Business Solutions