Researchers at MIT led by Josh Tenenbaum hypothesize that our brains have what you might call an intuitive physics engine: the information we gather through our senses is imprecise and noisy, but we nonetheless infer what will probably happen, so we can get out of the way, rush to keep a bag of rice from falling over, or cover our ears. Such a "noisy Newtonian" system involves probabilistic understanding and can fail. Consider an image of rocks stacked in precarious formations.

Based on your experience, your brain tells you that it should not be possible for them to remain standing. Yet there they are. (This is much like the physics engines inside video games such as Grand Theft Auto, which simulate a player's interactions with objects in their 3-D worlds.)
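The "noisy Newtonian" idea can be sketched as a Monte Carlo simulation: jitter your perceptual estimate of a scene many times, run a crude stability check on each sample, and count how often the stack falls. This is only a minimal illustration of the general approach, not MIT's actual model; the stability rule and noise model here are simplifying assumptions.

```python
import random

def topple_probability(offsets, noise_sd=0.1, n_samples=2000, seed=0):
    """Estimate the chance a stack of unit-width blocks falls.

    offsets[k] is the horizontal shift of block k relative to the block
    beneath it, in block widths. A configuration counts as unstable if
    the centre of mass of the blocks above any block overhangs that
    block's edge (more than 0.5 widths away). Sensory noise is modelled
    as Gaussian jitter added to each perceived offset.
    """
    rng = random.Random(seed)
    falls = 0
    for _ in range(n_samples):
        noisy = [x + rng.gauss(0.0, noise_sd) for x in offsets]
        # absolute centre position of each block (cumulative offsets)
        pos, total = [], 0.0
        for off in noisy:
            total += off
            pos.append(total)
        for j in range(len(pos) - 1):
            above = pos[j + 1:]
            com = sum(above) / len(above)
            if abs(com - pos[j]) > 0.5:  # overhangs supporting block
                falls += 1
                break
    return falls / n_samples

# A neatly aligned stack almost never falls; a precarious one usually does.
print(topple_probability([0.0, 0.05, 0.05]))  # low probability
print(topple_probability([0.0, 0.45, 0.45]))  # high probability
```

Because every judgment is an average over noisy samples, the model naturally produces graded, sometimes mistaken predictions, which is exactly why improbable rock formations look impossible to us.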

For decades, artificial intelligence with common sense has been one of the field's most difficult research challenges: AI that "understands" the function of things in the real world and the relationships among them, and is thus able to infer intent, causality, and meaning. AI has made astonishing advances over the years, but the bulk of AI deployed today is based on statistical machine learning that requires tons of training data, such as images on Google, to build a statistical model. The data are tagged by humans with labels such as "cat" or "dog," and a machine's neural network is exposed to all of the images until it can guess what an image shows as accurately as a human being.

One thing such statistical models lack is any understanding of what the objects are: that dogs are animals, for example, or that they sometimes chase cars. For this reason, these systems require huge amounts of data to build accurate models; they are doing something closer to pattern recognition than to understanding what's going on in an image. It's a brute-force approach to "learning" that has become feasible with the fast computers and vast datasets now available.

It’s also quite different from how children learn. Tenenbaum often shows a video by Felix Warneken, Frances Chen, and Michael Tomasello of the Max Planck Institute for Evolutionary Anthropology in Leipzig, Germany, in which a small child watches an adult walk repeatedly into a closet door, clearly wanting to get inside but failing to open it properly. After just a few attempts, the child pulls the door open, allowing the adult to walk through. What seems cute but obvious for a human (seeing just a few examples and coming up with a solution) is in fact very difficult for a computer. The child opening the door instinctively understands the physics of the situation: there is a door, it has hinges, it can be pulled open, and the adult trying to get inside cannot simply walk through it. Beyond the physics, the child is able to guess after a few attempts that the adult intends to go through the door but is failing.

This requires an understanding that human beings have plans and intentions and might want or need help accomplishing them. Learning a complex concept from a handful of examples, along with the specific conditions under which that concept applies, is something children do naturally and without supervision.

Infants like my own 9-month-old learn through interacting with the real world, which appears to be training various intuitive engines or simulators inside the brain. One is a physics engine (to use Tenenbaum's term) that learns, through piling up building blocks, knocking over cups, and falling off chairs, how gravity, friction, and other Newtonian laws manifest in our lives and set limits on what we can do.
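Tuning a simulator's parameters from experience can be illustrated with a toy fit: estimate gravity from noisy observations of how long objects take to fall. This is only a stand-in for whatever the brain actually does; the data here are synthetic and the closed-form least-squares fit is an assumption of the sketch.

```python
import random

def estimate_gravity(observations):
    """Least-squares fit of g in h = 0.5 * g * t**2 from noisy
    (height, fall_time) pairs: a toy stand-in for how an internal
    simulator might tune a physical parameter from experience."""
    num = sum(0.5 * t * t * h for h, t in observations)
    den = sum((0.5 * t * t) ** 2 for h, t in observations)
    return num / den

# Synthetic "experience": drops from various heights with timing noise.
rng = random.Random(1)
true_g = 9.8
obs = []
for _ in range(50):
    h = rng.uniform(0.5, 2.0)                # drop height, metres
    t = (2 * h / true_g) ** 0.5              # ideal fall time
    obs.append((h, t + rng.gauss(0, 0.01)))  # noisy timing
print(estimate_gravity(obs))  # close to 9.8
```

A few dozen noisy observations pin the parameter down well, which is one way to think about why a year of knocking over cups is such effective training data.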

Sourced through Scoop.it from: www.wired.com