The possibility of a self-taught artificial intelligence


Artificial intelligence seems to be everywhere, but what we are really witnessing is a supervised-learning revolution: We teach computers to see patterns, much as we teach children to read. But the future of A.I. depends on computer systems that learn on their own, without supervision, researchers say.

When a mother points to a dog and tells her baby, “Look at the doggy,” the child learns what to call the furry four-legged friends. That is supervised learning. But when that baby stands and stumbles, again and again, until she can walk, that is something else.

Computers are the same. Just as humans learn mostly through observation or trial and error, computers will have to go beyond supervised learning to reach the holy grail of human-level intelligence.

“We want to move from systems that require lots of human knowledge and human hand engineering” toward “increasingly more and more autonomous systems,” said David Cox, IBM Director of the MIT-IBM Watson AI Lab. Even if a supervised learning system read all the books in the world, he noted, it would still lack human-level intelligence because so much of our knowledge is never written down.

Supervised learning depends on annotated data: images, audio or text that is painstakingly labeled by hordes of workers. They circle people or outline bicycles on pictures of street traffic. The labeled data is fed to computer algorithms, teaching the algorithms what to look for. After ingesting millions of labeled images, the algorithms become expert at recognizing what they have been taught to see.
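For readers who want to see the idea in code, here is a minimal sketch of supervised learning in Python, assuming scikit-learn is available. The two-number "features" and the bicycle-versus-person labels are invented purely for illustration:

```python
from sklearn.linear_model import LogisticRegression

# Toy labeled dataset: each input is reduced to two made-up numeric
# features, and a human annotator has tagged it 0 ("bicycle") or
# 1 ("person"). Both the numbers and the labels are invented.
features = [[0.2, 0.9], [0.1, 0.8], [0.9, 0.1], [0.8, 0.2]]
labels = [0, 0, 1, 1]

# Supervised learning: fit a classifier to the human-labeled examples.
model = LogisticRegression()
model.fit(features, labels)

# The trained model can now label inputs it has never seen.
print(model.predict([[0.15, 0.85]]))  # expected output: [0]
```

The expertise is only as broad as the labels: the model can distinguish the categories humans annotated, and nothing else.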

But supervised learning is constrained to relatively narrow domains defined largely by the training data.

“There is a limit to what you can apply supervised learning to today due to the fact that you need a lot of labeled data,” said Yann LeCun, one of the founders of the current artificial-intelligence revolution and a recipient of the Turing Award, the equivalent of a Nobel Prize in computer science, in 2018. He is vice president and chief A.I. scientist at Facebook.

Methods that do not rely on such precise human-provided supervision remain much less explored; they have been eclipsed by the success of supervised learning and its many practical applications, from self-driving cars to language translation. But supervised learning still cannot do many things that are simple even for toddlers.

“It’s not going to be enough for human-level A.I.,” said Yoshua Bengio, who founded Mila, the Quebec AI Institute, and shared the Turing Award with Dr. LeCun and Geoffrey Hinton. “Humans don’t need that much supervision.”

Now, scientists at the forefront of artificial intelligence research have turned their attention back to less-supervised methods. “There’s self-supervised and other related ideas, like reconstructing the input after forcing the model to a compact representation, predicting the future of a video or masking part of the input and trying to reconstruct it,” said Samy Bengio, Yoshua’s brother and a research scientist at Google.
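The masking idea Samy Bengio mentions can be sketched in a few lines. The toy below, which assumes PyTorch, hides a random quarter of each unlabeled input and trains a small autoencoder to restore it; the random data and layer sizes are arbitrary illustrative choices:

```python
import torch
import torch.nn as nn

# Masked reconstruction on unlabeled data: hide part of each input
# and train a small autoencoder to fill it back in. The data is
# random noise and the layer sizes are arbitrary; both are purely
# illustrative.
model = nn.Sequential(
    nn.Linear(32, 8),   # force the input through a compact representation
    nn.ReLU(),
    nn.Linear(8, 32),   # reconstruct the original input
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
data = torch.randn(256, 32)  # no labels anywhere

for _ in range(100):
    mask = (torch.rand_like(data) > 0.25).float()  # hide ~25% of each input
    loss = nn.functional.mse_loss(model(data * mask), data)
    opt.zero_grad()
    loss.backward()
    opt.step()  # the supervision signal comes from the data itself
```

No human labels anything here: the missing pieces of the input serve as their own answer key.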

There is also reinforcement learning, which involves very limited supervision and does not rely on labeled training data. Reinforcement learning in computer science, pioneered by Richard Sutton, now at the University of Alberta in Canada, is modeled after reward-driven learning in the brain: Think of a rat learning to push a lever to receive a pellet of food. The strategy has been developed to teach computer systems to take actions.

Set a goal, and a reinforcement learning system will work toward that goal through trial and error until it is consistently receiving a reward. Humans do this all the time. “Reinforcement is an obvious idea if you study psychology,” Dr. Sutton said.
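A tabular Q-learning loop, one of the classic algorithms in Dr. Sutton's field, shows the goal-and-reward cycle in miniature. This sketch in plain Python puts an agent in an invented five-cell corridor with a reward at the far end; the hyperparameters are arbitrary:

```python
import random

# Trial-and-error learning in miniature: an agent on a five-cell
# corridor learns to walk right toward a "pellet" at the far end.
# There is no labeled data; the only feedback is the reward.
N_STATES, ACTIONS = 5, (-1, +1)          # actions: step left or right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1    # arbitrary hyperparameters

for episode in range(200):
    s = 0
    while s != N_STATES - 1:
        # Mostly exploit what has worked so far; occasionally explore.
        a = random.choice(ACTIONS) if random.random() < epsilon \
            else max(ACTIONS, key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s2 == N_STATES - 1 else 0.0
        # Q-learning update: move the estimate toward the reward plus
        # the discounted value of the best action from the next state.
        best_next = max(Q[(s2, b)] for b in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s2
```

Early episodes wander at random; by the later ones the learned values steer the agent straight to the reward.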

A more inclusive term for the future of A.I., he said, is “predictive learning,” meaning systems that not only recognize patterns but also predict outcomes and choose a course of action. “Everybody agrees we need predictive learning, but we disagree about how to get there,” Dr. Sutton said. “Some people think we get there with extensions of supervised learning ideas; others think we get there with extensions of reinforcement learning ideas.”

Pieter Abbeel, who runs the Berkeley Robot Learning Lab in California, uses reinforcement-learning systems that compete against themselves to learn faster in a method called self-play. Identical simulated robots, for example, sumo wrestle each other and initially are not very good, but they quickly improve. “By playing against your own level or against yourself, you can see what variations help and gradually build up skill,” he said.
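Real self-play systems pit learned policies against each other in a simulator, but the core loop of competing against a variation of yourself and keeping the winner can be caricatured in a few lines. The "game" (guessing a hidden number), the one-parameter agent and the target below are all invented stand-ins for the sumo robots, and this is closer in spirit to a competitive hill-climb than to Dr. Abbeel's actual method:

```python
import random

# Two copies of the same agent compete; whichever copy plays the
# game better becomes the next version of the agent. The game, the
# one-parameter agent and the hidden target are all invented.
TARGET = 42.0   # hidden optimum that no human ever labels
agent = 0.0     # the agent's single "skill" parameter

def score(guess):
    return -abs(guess - TARGET)  # closer guesses win the match

for match in range(100):
    # Pit two slight variations of the current agent against each other.
    a = agent + random.gauss(0, 1.0)
    b = agent + random.gauss(0, 1.0)
    agent = a if score(a) > score(b) else b  # the winner carries on

print(round(agent, 1))  # climbs toward 42.0 as the agent beats itself
```

Because each round's opponent is exactly as skilled as the agent itself, the competition stays winnable while the agent improves, which is the appeal of self-play.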

As powerful as reinforcement learning is, Dr. LeCun says he believes that other forms of machine learning are more critical to general intelligence.

“My money is on self-supervised learning,” he said, referring to computer systems that ingest huge amounts of unlabeled data and make sense of it all without supervision or reward. He is working on models that learn by observation, accumulating enough background knowledge that some sort of common sense can emerge.

“Imagine that you give the machine a piece of input, a video clip, for example, and ask it to predict what happens next,” Dr. LeCun said in his office at New York University, decorated with stills from the movie “2001: A Space Odyssey.” “For the machine to train itself to do this, it has to develop some representation of the data. It has to understand that there are objects that are animate and others that are inanimate. The inanimate objects have predictable trajectories, the other ones don’t.”

After a self-supervised computer system “watches” millions of YouTube videos, he said, it will distill some representation of the world from them. Then, when the system is asked to perform a particular task, it can draw on that representation — in other words, it can teach itself.
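Predicting the future of a signal is the same trick at a much smaller scale. In the sketch below, which assumes PyTorch, the "video" is a one-dimensional sine wave, and the model's only training target is the next frame of the data itself:

```python
import torch
import torch.nn as nn

# Self-supervised prediction on a toy "video": a sine wave whose
# next frame the model must predict from the two frames before it.
# The training target is simply the data's own future.
t = torch.linspace(0, 20, 400)
signal = torch.sin(t).unsqueeze(1)                            # shape (400, 1)
past = torch.stack([signal[:-2, 0], signal[1:-1, 0]], dim=1)  # frames t-1, t
future = signal[2:]                                           # frame t+1

model = nn.Sequential(nn.Linear(2, 16), nn.Tanh(), nn.Linear(16, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)

for _ in range(500):
    loss = nn.functional.mse_loss(model(past), future)  # no human labels
    opt.zero_grad()
    loss.backward()
    opt.step()
```

To predict well, the model must internalize something about how the signal behaves, which is the point: prediction forces the system to build a representation of its world.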

Dr. Cox of the MIT-IBM Watson AI Lab is taking a similar approach, combining more traditional forms of artificial intelligence with deep networks in what his lab calls neuro-symbolic A.I. The goal, he says, is to build A.I. systems that can acquire a baseline level of common-sense knowledge similar to that of humans.

“A huge fraction of what we do in our day-to-day jobs is constantly refining our mental models of the world and then using those mental models to solve problems,” he said. “That encapsulates an awful lot of what we’d like A.I. to do.”

Many people hope robots will eventually embody artificial intelligence and act freely in the world. But it will take more than supervised learning to get them there. Currently, robots can operate only in well-defined environments with little variation.

“Our working assumption is that if we build sufficiently general algorithms, then all we really have to do, once that’s done, is to put them in robots that are out there in the real world doing real things,” said Sergey Levine, an assistant professor at Berkeley, who runs the university’s Robotic A.I. & Learning Lab.

He is using a form of self-supervised learning in which robots explore their environment and build up the base knowledge that Dr. LeCun and Dr. Cox are talking about.

“They just play with their environment and learn,” Dr. Levine said of the lab’s robots. “The robot essentially imagines something that might happen and then tries to figure out how to make that happen.”
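That imagine-then-act loop can be sketched with a random-shooting planner, one simple way to search for actions. Everything below, the point-on-a-line robot, its dynamics and its self-proposed goals, is invented for illustration and is not Dr. Levine's system:

```python
import random

# A point-on-a-line "robot" imagines a goal for itself, then searches
# for an action sequence that reaches it by sampling many candidate
# plans and keeping the best: random-shooting planning. The robot,
# its dynamics and the goals are all invented for illustration.
def simulate(state, actions):
    for a in actions:   # trivial dynamics: each action nudges the robot
        state += a
    return state

for trial in range(3):
    state = 0.0
    goal = random.uniform(-5, 5)  # a self-proposed goal, not a human label
    # Imagine 500 random ten-step plans; keep the one whose predicted
    # outcome lands closest to the imagined goal.
    best = min(
        ([random.uniform(-1, 1) for _ in range(10)] for _ in range(500)),
        key=lambda plan: abs(simulate(state, plan) - goal),
    )
    print(f"goal={goal:+.2f}  reached={simulate(state, best):+.2f}")
```

No human sets the goals or grades the attempts; the robot generates its own curriculum by proposing targets and learning what it takes to hit them.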

By doing so, the robots build up a body of knowledge that they can use in a new setting. Eventually, robots could be networked so that they share the knowledge that each acquires.

“A robot spends a few hours playing with a door, moving it this way and that, and it can open that one door,” Dr. Levine said. “We have six different robots, so if we have all of them playing with different kinds of doors, maybe then when we give one a new door, it will actually generalize to that new door because it has seen enough variety.”

Dr. Abbeel, a founder of Covariant, a company that builds A.I. robotics for industrial automation, said that eventually all of these methods were likely to be combined.

Could we build machines at some point that will be as intelligent as humans? “Of course; there’s no question,” Dr. LeCun said. “It’s a matter of time.”
The New York Times
