Researchers at the University of Maryland have developed a way to combine perception and motor commands using hyperdimensional computing theory, an approach to artificial intelligence (AI) that could fundamentally alter and improve how robots translate what they sense into what they do.
Boston Red Sox star outfielder Mookie Betts steps up to the plate on a 3-2 count, studies the pitcher and the situation, gets the go-ahead from third base, tracks the ball’s release, swings… and gets a single up the middle. Just another trip to the plate for the reigning American League MVP.
Betts has honed natural reflexes, years of experience, knowledge of the pitcher’s tendencies, and an understanding of the trajectories of various pitches. What he sees, hears, and feels seamlessly combines with his brain and muscle memory to time the swing that produces the hit. Now take the knowledge Betts has gleaned from years of experience and ask: could a robot do the same thing?
The answer is no, not today.
The robot would need to use a linkage system to slowly coordinate data from its sensors with its motor capabilities. And its memory is horrible.
But that all may change with a new method that combines perception and motor commands using hyperdimensional computing theory, which could fundamentally alter and improve the basic AI task of sensorimotor representation: how robots translate what they sense into what they do.
“Learning Sensorimotor Control with Neuromorphic Sensors: Toward Hyperdimensional Active Perception” is a paper written by University of Maryland computer science Ph.D. students Anton Mitrokhin and Peter Sutor, Jr.; Cornelia Fermüller, an associate research scientist with the University of Maryland Institute for Advanced Computer Studies; and Computer Science Professor Yiannis Aloimonos. Mitrokhin and Sutor are advised by Aloimonos.
Integration is the most important challenge facing the robotics field. A robot’s sensors and the actuators that move it are separate systems, linked together by a central learning mechanism that infers a needed action given sensor data, or vice versa.
The cumbersome three-part AI system – each part speaking its own language – is a slow way to get robots to accomplish sensorimotor tasks. The next step in robotics will be to integrate a robot’s perceptions with its motor capabilities. This fusion, known as “active perception,” would provide a more efficient and faster way for the robot to complete tasks.
In the new computing theory, a robot’s operating system would be based on hyperdimensional binary vectors (HBVs), which exist in a sparse and extremely high-dimensional space. HBVs can represent disparate discrete things – for example, a single image, a concept, a sound or an instruction; sequences made up of discrete things; and groupings of discrete things and sequences. They can account for all these types of information in a meaningfully constructed way, binding each modality together in long vectors of 1s and 0s of equal dimension. In this system, action possibilities, sensory input and other information occupy the same space, speak the same language, and are fused, creating a kind of memory for the robot.
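To make the idea concrete, here is a minimal sketch in Python with NumPy of how such vectors behave. It is an illustration of the general hyperdimensional computing technique, not the authors’ implementation; the dimensionality and helper names (random_hbv, bind, bundle, similarity) are assumptions chosen for clarity.

```python
import numpy as np

D = 10_000  # typical HBV dimensionality: long vectors of 1s and 0s

rng = np.random.default_rng(0)

def random_hbv():
    """A random binary hypervector; at high D, two random HBVs are
    nearly orthogonal (Hamming similarity close to 0.5)."""
    return rng.integers(0, 2, size=D, dtype=np.uint8)

def bind(a, b):
    """Bind two HBVs with XOR; the result is dissimilar to both inputs,
    and either input can be recovered by binding with the other."""
    return a ^ b

def bundle(vectors):
    """Group HBVs by bitwise majority vote (ties resolve to 1); the
    result stays measurably similar to every vector it contains."""
    return (2 * np.sum(vectors, axis=0) >= len(vectors)).astype(np.uint8)

def similarity(a, b):
    """Normalized Hamming similarity: 1.0 = identical, ~0.5 = unrelated."""
    return 1.0 - np.count_nonzero(a ^ b) / D

# Disparate modalities become vectors of the same fixed width...
image, sound, action = random_hbv(), random_hbv(), random_hbv()

# ...and one "moment" fuses what was sensed with what was done.
moment = bundle([bind(image, sound), action])
```

In this sketch, similarity(moment, action) comes out well above 0.5 while similarity(moment, random_hbv()) hovers near 0.5, which is what lets a single fixed-width vector meaningfully stand in for several fused modalities.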
A hyperdimensional framework can turn any sequence of “instants” into new HBVs, and group existing HBVs together, all at the same fixed vector length. This is a natural way to create semantically significant and informed “memories.” Encoding more and more information in turn leads to “history” vectors and the ability to remember. Signals become vectors, indexing translates to memory, and learning happens through clustering.
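One common way to encode order in the hyperdimensional computing literature is to permute each instant’s vector by its position in time and then bundle the results. The sketch below continues the one above with the same caveat: the helpers (permute, encode_sequence) are illustrative, not the paper’s API.

```python
def permute(v, n):
    """Cyclic shift by n positions; encodes sequence order, so
    'A then B' produces a different vector than 'B then A'."""
    return np.roll(v, n)

def encode_sequence(instants):
    """Fold a sequence of instant-HBVs into a single fixed-width
    "history" vector: shift each by its timestep, then bundle."""
    return bundle([permute(v, i) for i, v in enumerate(instants)])

# Three sensed instants collapse into one memory of the same dimension.
history = encode_sequence([random_hbv() for _ in range(3)])
```

Because history has the same length as any single instant, histories can themselves be bundled into longer-range memories, and grouping such vectors by Hamming similarity is the clustering through which learning happens.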
The robot’s memories of what it has sensed and done in the past could lead it to anticipate future perceptions and inform its future actions. This active perception would enable the robot to become more autonomous and better able to complete tasks.
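A minimal sketch of that kind of recall, again reusing the helpers above: the robot compares its current percept against stored experiences and retrieves the action paired with the closest match. The percept-action memory structure here is an assumption for illustration, not the paper’s design.

```python
def recall(percept, memories):
    """Nearest-neighbor lookup (assumed memory layout): return the
    action stored alongside the past percept most similar to this one."""
    past_percept, past_action = max(memories, key=lambda m: similarity(percept, m[0]))
    return past_action

# Each memory pairs what was sensed with what was done at that moment.
memories = [(bind(image, sound), action)]
expected_action = recall(bind(image, sound), memories)
```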
“An active perceiver knows why it wishes to sense, then chooses what to perceive, and determines how, when and where to achieve the perception,” Aloimonos said. “It selects and fixates on scenes, moments in time, and episodes. Then it aligns its mechanisms, sensors, and other components to act on what it wants to see, and selects viewpoints from which to best capture what it intends. Our hyperdimensional framework can address each of these goals.”
Applications of the Maryland research could extend far beyond robotics. The ultimate goal is to be able to do AI itself in a fundamentally different way: from concepts to signals to language. Hyperdimensional computing could provide a faster and more efficient alternative to the iterative neural network and deep learning AI methods currently used in computing applications such as data mining, visual recognition and translating images to text.
“Neural network-based AI methods are big and slow, because they are not able to remember,” Mitrokhin said. “Our hyperdimensional theory method can create memories, which will require a lot less computation, and should make such tasks much faster and more efficient.”