How would you like to have a robot assistant in your home or at work? It might seem like a sci-fi fantasy, but it may be closer to reality than you think. Boston Dynamics plans to release its first commercial robot by the end of this year, and a number of companies are already working to make collaborative robots, or cobots, safer and more interactive with humans. Researchers from MIT have developed an algorithm that makes these robots more efficient without compromising safety, which has been a major concern on factory floors. And biomedical researchers from the University of Houston have made a major breakthrough in brain-computer interfaces that lets prosthetics more accurately anticipate human motion. But before we go further, make sure you’re following us over at Minds.com/subverse, where we’ll be posting additional and exclusive content going forward.
Boston Dynamics’ Spot is a four-legged electric robot that can pick up and carry objects with an extendable arm positioned on its so-called “head.” It has perception sensors, stereo and depth cameras, position and force sensors, and an inertial measurement unit, an electronic device that tracks Spot’s acceleration and orientation. The dog-like robot was shown off at the Re:MARS conference in Las Vegas, where robotics, machine learning, and space exploration come together in one place. At the conference, two Spot bots interacted with attendees, controlled by two Boston Dynamics employees using modified gaming tablets. Spot can keep itself balanced on uneven terrain and even withstand kicks and shoves, but it can’t yet decide for itself where to walk. Once an area is mapped out, however, Spot can navigate its environment autonomously, though it still needs human handlers.
At the conference, the handlers showed the crowd how simple it is to operate Spot. The robot is steered with a simple directional pad, and the control pad shows the operator a real-time video feed from its front-facing cameras. Another control operates the robotic arm mounted on its head. As with any new technology, there are still some issues to be worked out: during the demonstration, Spot’s legs suddenly folded up and it collapsed, taking a nose dive to the floor. Boston Dynamics CEO Marc Raibert said they’re still testing Spot in a number of work environments, like package delivery and surveying. Its three-dimensional cameras give it the ability to map construction sites, identify hazards, and track progress. But Spot can be customized for a wide variety of tasks: its arm attachment lets it manipulate objects and even open doors. At the conference, Spot picked up a toy and offered it to a police dog.
The robots currently used in factories and warehouses only perform tasks that are programmed into them, usually around the time of their creation. Robots like Spot can work beside humans in changing environments, reacting to dynamic conditions and other hazards. Raibert showed an example of this on stage when he presented a video of Spot overcoming man-made obstacles while trying to open a door. There are, of course, questions about the actual advantage of employing an expensive robot over a human who can perform the same tasks. Boston Dynamics began developing robots like Spot for the US military over ten years ago, which led other countries to develop their own four-legged robots, like those from Chinese company Unitree and Swiss company ANYbotics. Boston Dynamics now has to figure out how to scale production of these robots to meet demand. Raibert said they’re aiming to manufacture one thousand units per year, but didn’t say how much they would charge per robot, though the commercial version will be much less expensive than competitors’ bots and Boston Dynamics’ own prototypes. He did mention they already have paying customers, including Japanese construction companies that are testing Spot as a progress overseer on worksites.
According to The Verge, Raibert said: “There’s a remarkable number of construction companies we’re talking to, but we have some other applications that are very promising, [including] in hostile environments where the cost of having people there is high.”
There are plenty of uses for Spot that would keep human workers out of unsafe conditions, like disaster zones.
Robots have been working with human workers for years on factory floors and in warehouses. According to the International Federation of Robotics, there were an average of one hundred eighty-nine robots for every ten thousand workers in the US. Industrial robots that lift and move huge pieces of material for manufacturing are often bolted to the floor behind fences, away from their organic-bodied coworkers, to keep those coworkers safe. But the next generation of robots are cobots, collaborative robots, which are increasingly mobile and interactive, and that opens up the possibility of unintended injuries to the humans they work with.
A number of engineers and companies are working on new technologies to minimize the risk to humans. Massachusetts-based Veo Robotics introduced Veo FreeMove on Monday, giving robots spatial awareness of obstacles and objects within reach. The company partnered with four of the largest robot manufacturers in the world to add three-dimensional depth sensors and computer vision software to their robots, giving them a monitoring system that signals the robot to stop or slow down if a human-sized object comes too close. After the obstruction passes, the robot proceeds with its task. Veo conducted trials with manufacturers of vehicles, appliances, and packaged goods, using Xbox Kinect cameras while building its own sensors for future production.
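The stop-or-slow behavior described here is a form of speed-and-separation monitoring. As a rough illustration of the idea (not Veo's actual system — the function names and distance thresholds below are invented for this sketch):

```python
# Illustrative sketch of speed-and-separation monitoring, the kind of
# behavior described above: halt when a human-sized object is very
# close, slow down when it is nearby, run at full speed otherwise.
# Thresholds and names are hypothetical, not from any real product.

STOP_DISTANCE_M = 1.0   # closer than this: protective stop
SLOW_DISTANCE_M = 2.5   # between thresholds: reduced speed

def command_speed(nearest_obstacle_m: float, full_speed: float) -> float:
    """Return the speed the robot should run at, given the distance
    (in meters) to the nearest human-sized object the depth sensors see."""
    if nearest_obstacle_m < STOP_DISTANCE_M:
        return 0.0  # protective stop until the obstruction passes
    if nearest_obstacle_m < SLOW_DISTANCE_M:
        # scale speed linearly between the stop and slow thresholds
        fraction = ((nearest_obstacle_m - STOP_DISTANCE_M)
                    / (SLOW_DISTANCE_M - STOP_DISTANCE_M))
        return full_speed * fraction
    return full_speed  # workspace clear: proceed with the task
```

Called once per sensor frame, this lets the robot resume its task automatically as soon as the person moves out of range, rather than requiring a manual restart.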
Veo CEO and co-founder Patrick Sobalvarro told CNBC, “The collaborative power and force-limited robots have been very useful for assembly of small things. What we would like to do is extend those advantages to all robots, regardless of the size. Whether it’s a robot that can carry a car or a robot that can carry a car door or a robot that moves fast and positions things very precisely.”
Cobots are getting lighter, more versatile, and more interactive with humans, opening them up to industries outside manufacturing, like food services, law enforcement, and health care. They’re not meant to replace human workers but to act more like assistants, although they can be designed to operate with limited guidance or fully autonomously. Demand for these bots has been growing, and companies are doing their best to reduce errors and increase the quality and speed of production. Sobalvarro explained, “What we hear from every factory, every line manager is that they can’t hire enough production workers. The production labor workforce is aging out, and one of the things we see as an advantage of our system is that physical strength will no longer be required for production workers. This company is predicated on the belief that production labor continues to be tremendously important in manufacturing.”
Although these robots are fitted with safety protocols to protect humans, that protection often comes at the expense of productivity. MIT researchers working with BMW noticed that robots were overly cautious when operating around humans and would waste a lot of time just waiting for workers to pass. So they created a new algorithm that increases efficiency while still prioritizing safety. The algorithm improves a robot’s ability to predict the trajectory of a human worker in motion, allowing it to maneuver around the workers’ foot traffic instead of freezing up.
According to EurekAlert!, Julie Shah, associate professor of aeronautics and astronautics at MIT, said, “This algorithm builds in components that help a robot understand and monitor stops and overlaps in movement, which is a core part of human motion. This technique is one of the many ways we’re working on robots better understanding people.” Researchers have used algorithms from music and speech processing to align sets of related data, but human motion is more variable than music or speech; even in repetitive processes, humans make slight variations in their movement each time. Other algorithms record streaming motion data as dots that represent a person’s position over a time span, comparing those dots to a library of common trajectories. These algorithms can be easily confused in some pretty common scenarios, like a person pausing temporarily mid-motion, because the dots bunch up in one spot. To find a workaround, Shah and her colleague Pem Lasota created a partial trajectory algorithm that aligns segments of an in-progress trajectory with a library of previous reference trajectories in real time. This gives a robot the ability to accurately anticipate overlaps and stops in a human’s path.
“Say you’ve executed this much of a motion,” Lasota explains. “Old techniques will say, ‘this is the closest point on this representative trajectory for that motion.’ But since you only completed this much of it in a short amount of time, the timing part of the algorithm will say, ‘based on the timing, it’s unlikely that you’re already on your way back, because you just started your motion.’”
The research team found that their algorithm estimated a person’s progress through a trajectory better than commonly used trajectory alignment algorithms. With their algorithm, the robot was less prone to stopping, instead resuming its task right after a human crossed its path. The technique can also be used as a preprocessing step for other human-robot interactions, like action recognition and gesture detection. Shah says this algorithm will be key in enabling robots to respond to human movement and behavior patterns, not just in factories but in homes as well: “This technique could apply to any environment where humans exhibit typical patterns of behavior. The key is that the [robotic] system can observe patterns that occur over and over so that it can learn something about human behavior. This is all in the vein of work of the robot better understand[ing] aspects of human motion, to be able to collaborate with us better.”
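The core idea — match the motion observed so far against a library of reference trajectories, and use elapsed time to rule out implausible matches like “already on the way back” — can be sketched in miniature. This toy version is not the researchers' algorithm; the data, cost function, and weighting are invented to illustrate the timing insight from the quote above:

```python
# Toy illustration of partial-trajectory alignment with a timing term.
# A purely spatial match can confuse the outbound and return legs of a
# repetitive motion; penalizing timestamp mismatch disambiguates them.
# All data and the scoring function are made up for illustration.

def estimate_progress(observed, reference, time_weight=1.0):
    """observed/reference: lists of (t, x) samples along a 1-D path.
    Return the index into `reference` that best explains the latest
    observed point, combining spatial closeness with how plausible
    that reference timestamp is given the time actually elapsed."""
    t_now, x_now = observed[-1]
    best_idx, best_cost = 0, float("inf")
    for i, (t_ref, x_ref) in enumerate(reference):
        spatial = abs(x_now - x_ref)        # how close in space
        temporal = abs(t_now - t_ref)       # how plausible in time
        cost = spatial + time_weight * temporal
        if cost < best_cost:
            best_idx, best_cost = i, cost
    return best_idx

# A reference motion that reaches x=2 and comes back. The observed
# point x=1.1 is spatially ambiguous (outbound leg or return leg?),
# but only one second has elapsed, so the outbound leg wins.
reference = [(0, 0.0), (1, 1.0), (2, 2.0), (3, 1.0), (4, 0.0)]
observed = [(0, 0.0), (1, 1.1)]
```

Here `estimate_progress(observed, reference)` picks index 1 (the outbound leg) rather than index 3 (the spatially similar return leg), which is the behavior Lasota describes: “based on the timing, it’s unlikely that you’re already on your way back.”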
A team of biomedical engineering professors from the University of Houston published a study in eNeuro last week showing that a brain-computer interface can actually sense when its user is anticipating a reward, by tracking interactions between single-neuron activity and the information flowing through populations of neurons, known as the “local field potential.” The findings allow the development of a brain-computer interface that autonomously updates, self-improves, and learns about its subject without reprogramming. This has exciting implications for robotic prosthetics, which could operate more naturally by sensing what the user wants to do, like picking up an object. According to the University of Houston’s news release, Professor of Biomedical Engineering Joe Francis claims, “This will help prosthetics work the way the user wants them to. The brain-computer interface quickly interprets what you’re going to do and what you expect as far as whether the outcome will be good or bad.” He added that this raises scientists’ ability to predict reward outcomes from the mid-seventies to ninety-seven percent. Francis used implanted electrodes to examine spikes in brain activity during tasks to understand how those interactions are regulated by reward expectation. Even when a task called for no movement, just passive observation, the brain-computer interface was able to determine intention based on neural activity patterns that resembled movement.
Francis explained, “We assume the intention is in there, and we decode that information by an algorithm and have it control either a computer cursor, for example, or a robotic arm. … This is important because we are going to have to extract this information and brain activity out of people who cannot actually move, so this is our way of showing we can still get the information even if there is no movement. This examination of reward motivation in the primary motor cortex could be useful in developing an autonomously updating brain-machine interface.”
The potential for robots and automation is both exciting and concerning.