The US military research agency will spend millions helping computers engage better with the real world – a necessary step en route to creating genuinely sentient robots, not just calculating machines.
“The absence of common sense prevents an intelligent system from understanding its world, communicating naturally with people, behaving reasonably in unforeseen situations, and learning from new experiences,” Dave Gunning, a program manager in DARPA’s Information Innovation Office, said.
“This absence is perhaps the most significant barrier between the narrowly focused AI applications we have today and the more general AI applications we would like to create in the future,” he added, as the agency announced its Machine Common Sense (MCS) program.
Rather than stuffing them with complex algorithms for every situation, researchers want computers to organically imitate the learning process babies go through as they grow and adjust to their environment. To do so, AI will need to understand how physical objects behave in the real world, grasp the spatial relationships between them, and recognize the basic psychological motivations of the living things around them.
“During the first few years of life, humans acquire the fundamental building blocks of intelligence and common sense,” Gunning said. “Developmental psychologists have found ways to map these cognitive capabilities across the developmental stages of a human’s early life, providing researchers with a set of targets and a strategy to mimic for developing a new foundation for machine common sense.”
The approach to this will be twofold: the first is to create a computational model that is capable of learning like an infant, combining “natural language processing, deep learning, and other areas of AI research.”
The second is to create a repository of common sense knowledge. By scanning the internet, with help from researchers and crowdsourcing, the team plans to compile an ever-growing database of common human behavior, settings, and scenarios, which can then be combined with the AI learning mechanism.
At the initial stage, the success of the program will be gauged by asking questions that humans find insultingly simple but most computers are unable to comprehend.
Here are a few samples from an AI test by a civilian group doing similar research – the Allen Institute for Artificial Intelligence, funded by Microsoft co-founder Paul Allen – which DARPA will use as its benchmark.
Which object is nonliving?
What would you typically find in a trash can?
What can fit better through a house door: a basketball or an elephant?
The civilian and military uses of this are glaringly obvious.
In a game, humans can program a bot that never misses a shot or wins every Formula 1 race. But as Tesla and other self-driving car developers well know, performing even narrowly defined functions in the real world, while avoiding the sort of lapses even the worst human driver never commits, is a hard task.
Now, imagine a drone or a robot soldier that not only has perfect accuracy, but truly understands its environment, and we have caught up with the world of Terminator.
But those afraid of Skynet shouldn’t necessarily rush for their candle-lit bomb shelters: the biggest common-sense AI project, Cyc, was started 34 years ago, and while it knows more than any encyclopedia, it still gets stumped by any non-standard question or situation.