The phenomenon has been described anecdotally for years, but how and
why this happens is still a subject of debate in robotics, computer
graphics and neuroscience. Now an international team of researchers, led
by Ayse Pinar Saygin of the University of California, San Diego, has
taken a peek inside the brains of people viewing videos of an uncanny
android (compared to videos of a human and a robot-looking robot).
Published in the Oxford University Press journal Social Cognitive and Affective Neuroscience, the functional MRI study suggests that what may be going on is due to a perceptual mismatch between appearance and motion.
The term "uncanny valley" refers to an artificial agent's drop in
likeability when it becomes too humanlike. People respond positively to
an agent that shares some characteristics with humans -- think dolls,
cartoon animals, R2-D2. As the agent becomes more humanlike, it becomes
more likeable. But at some point that upward trajectory stops and
instead the agent is perceived as strange and disconcerting. Many
viewers, for example, find the characters in the animated film "The Polar
Express" to be off-putting. And most modern androids, including the
Japanese Repliee Q2 used in the study here, are also thought to fall
into the uncanny valley.
Saygin and her colleagues set out to discover if what they call the
"action perception system" in the human brain is tuned more to human
appearance or human motion, with the general goal, they write, "of
identifying the functional properties of brain systems that allow us to
understand others' body movements and actions."
They tested 20 subjects, aged 20 to 36, who had no experience working
with robots, had not spent time in Japan (where there is potentially
more cultural exposure to and acceptance of androids), and did not even
have friends or family from Japan.
The subjects were shown 12 videos of Repliee Q2 performing such
ordinary actions as waving, nodding, taking a drink of water and picking
up a piece of paper from a table. They were also shown videos of the
same actions performed by the human on whom the android was modeled and
by a stripped version of the android -- skinned to its underlying metal
joints and wiring, revealing its mechanics until it could no longer be
mistaken for a human. That is, they set up three conditions: a human
with biological appearance and movement; a robot with mechanical
appearance and mechanical motion; and a human-seeming agent with the
exact same mechanical movement as the robot.
At the start of the experiment, the subjects were shown each of the
videos outside the fMRI scanner and were told which agent was a robot
and which was human.
The biggest difference in brain response the researchers noticed came
during the android condition. It appeared in the parietal cortex, on both
sides of the brain, specifically in the areas that connect the part of the
brain's visual cortex that processes bodily movements with the section
of the motor cortex thought to contain mirror neurons (sometimes called
"monkey-see, monkey-do neurons" or "empathy neurons").
The researchers interpret the fMRI results as showing, in essence,
evidence of mismatch: the brain "lit up" when the human-like appearance
of the android and its robotic motion "didn't compute."
"The brain doesn't seem tuned to care about either biological
appearance or biological motion per se," said Saygin, an assistant
professor of cognitive science at UC San Diego and alumna of the same
department. "What it seems to be doing is looking for its expectations
to be met -- for appearance and motion to be congruent."
In other words, if it looks human and moves like a human, we are OK
with that. If it looks like a robot and acts like a robot, we are OK
with that, too; our brains have no difficulty processing the
information. The trouble arises when -- contrary to a lifetime of
expectations -- appearance and motion are at odds.
"As human-like artificial agents become more commonplace, perhaps our
perceptual systems will be re-tuned to accommodate these new social
partners," the researchers write. "Or perhaps, we will decide it is not a
good idea to make them so closely in our image after all."
Saygin thinks it's "not so crazy to suggest we brain-test-drive
robots or animated characters before spending millions of dollars on
their development."
It's not too practical, though, to do these test-drives in expensive
and hard-to-come-by fMRI scanners. So Saygin and her students are
currently on the hunt for an analogous EEG signal. EEG technology is
cheap enough that electrode caps are being developed for home use.
The research was funded by the Kavli Institute for Brain and Mind at
UC San Diego. Saygin was additionally supported by the California
Institute of Telecommunication and Information Technology (Calit2) at
UCSD.
Saygin's coauthors are Thierry Chaminade of the Mediterranean Institute
for Cognitive Neuroscience, France; Hiroshi Ishiguro of Osaka University
and ATR, Japan; Jon Driver of University College London; and Chris
Frith of the University of Aarhus, Denmark.