- Cloning
- Contents
- Introduction
- Part 1. Theoretical basis of cloning
- Definition of cloning
- Cloning techniques
- Examples of cloning animals
- Human cloning
- Part 2. Practical application of cloning
- 2.1. Reasons for cloning
- 2.2. Risks of cloning
- 2.3. Computer cloning technologies
- Part 3. Issues relating to cloning
- 3.2. Cloning Myths
- Conclusion
- References
2.3. Computer cloning technologies
Disney develops "face cloning" technique for animatronics
The “uncanny valley” is one of the frustrating paradoxes of robotics. Every year, roboticists make humanoid robots that more accurately imitate human beings, but it turns out that the better the imitation, the creepier the end result. It’s that strange, hair-raising sensation one gets when visiting the Hall of Presidents at Disneyland. True, George Washington and Abraham Lincoln look very lifelike, but there’s always something wrong that you can’t quite describe. In the hope of bridging this valley, a Disney Research team in Zurich, Switzerland, has invented a new robot-making technique dubbed “face cloning.” This technique combines 3D digital scanning and advanced silicone skins to give animatronic robots more realistic facial expressions. [17]
Binghamton University computer scientist Lijun Yin thinks that using a computer should be a comfortable and intuitive experience, like talking to a friend. As anyone who has ever yelled "Why did you go and do that?" at their PC or Mac knows, however, using a computer today can feel more like talking to an overly literal government bureaucrat who just doesn't get you. Thanks to Yin's work on things like emotion recognition, that might be on its way to becoming a thing of the past.
Most of Yin's research in this area centers on computer vision – improving the ways in which computers gather data through their webcams. More specifically, he is interested in getting computers to "see" their users and to guess what they want by looking at them.
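As a rough illustration of what "seeing" the user through a webcam can involve, the sketch below uses OpenCV's bundled Haar-cascade face detector on a live camera feed. The library choice, camera index, and detector parameters are assumptions made for the example, not details of Yin's actual system.

```python
# Minimal sketch: detect the user's face in a webcam stream with OpenCV.
import cv2

# Load the pre-trained frontal-face detector that ships with OpenCV.
cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
face_detector = cv2.CascadeClassifier(cascade_path)

capture = cv2.VideoCapture(0)  # default webcam (assumed to be device 0)
while True:
    ok, frame = capture.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Each detection is an (x, y, width, height) rectangle around a face.
    faces = face_detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("user view", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press 'q' to quit
        break

capture.release()
cv2.destroyAllWindows()
```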
One of his graduate students has already given a PowerPoint presentation in which content on the slides was highlighted by eye-tracking software that monitored the student's face.
A potentially more revolutionary area of his work, however, involves getting computers to distinguish between human emotions. By obtaining 3D scans of the faces of 100 subjects, Yin and Binghamton psychologist Peter Gerhardstein have created a digital database of 2,500 facial expressions. The emotions conveyed by these expressions all fall under the headings of anger, disgust, fear, joy, sadness, and surprise. By mapping the differences in the subjects' faces from emotion to emotion, Yin is working on algorithms that can visually identify not only the six main emotions but also subtle variations between them. The database is available free of charge to the nonprofit research community.
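A hedged sketch of how such an emotion classifier could be structured is shown below: it trains an ordinary support-vector classifier on flattened facial-landmark features labelled with the six basic emotions. The load_landmark_features helper and its random stand-in data are hypothetical placeholders; the real database contains 3D scans, and the actual algorithms used in Yin's work are not described in the source.

```python
# Toy sketch: classify six basic emotions from facial-landmark feature vectors.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

EMOTIONS = ["anger", "disgust", "fear", "joy", "sadness", "surprise"]

def load_landmark_features():
    """Hypothetical placeholder: return (n_samples, n_features) landmark
    coordinates and an emotion label index for each expression sample."""
    rng = np.random.default_rng(0)
    X = rng.normal(size=(600, 68 * 3))            # e.g. 68 landmarks in 3D
    y = rng.integers(0, len(EMOTIONS), size=600)  # one of the six emotions
    return X, y

X, y = load_landmark_features()
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

# A support-vector classifier over landmark features is one simple baseline
# for telling the six emotions apart.
clf = SVC(kernel="rbf", gamma="scale")
clf.fit(X_train, y_train)

print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))
print("example prediction:", EMOTIONS[clf.predict(X_test[:1])[0]])
```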
Besides smoothing interactions between humans and computers, Yin hopes that his software could be used to tell when medical patients with communication problems are in pain. He also believes it could be applied to lie detection and to teaching autistic children how to recognize the emotions of other people.
This is by no means the first foray into computer emotion recognition. Researchers at Cambridge University have developed a facial-expression-reading driving assistance robot, while Unilever has demonstrated a machine that dispenses free ice cream to people who smile at it. [18]
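As a toy illustration of the smile-triggered idea, the sketch below chains OpenCV's stock face and smile Haar cascades on a single webcam frame. The detection thresholds and the final print-out are assumptions standing in for whatever the actual machine does, which is not documented in the source.

```python
# Toy sketch: decide whether a webcam frame contains a smiling face.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)
smile_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_smile.xml"
)

def sees_smile(frame) -> bool:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.1, 5):
        roi = gray[y:y + h, x:x + w]
        # Smile detection is noisy, so require a fairly strict match.
        if len(smile_cascade.detectMultiScale(roi, 1.7, 20)) > 0:
            return True
    return False

cap = cv2.VideoCapture(0)
ok, frame = cap.read()
if ok and sees_smile(frame):
    print("Smile detected - dispense the ice cream!")
cap.release()
```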
