Affective Computing: Teaching Machines About Emotion 131
jbc writes "The L.A. Times is running a story about affective computing, a field in which researchers are programming computers to recognize human emotions through the use of such clues as facial expression, vocal tone, and blood pressure. Some hail it as the dawn of a new era in super-useful machines, while others warn about invasions of privacy."
What about clues like ... (Score:1, Funny)
I can just see the possibilities: (Score:3, Funny)
i'll show you emotion... (Score:2)
Oh just great (Score:2, Funny)
Dave.... (Score:1, Funny)
What are you doing, Dave? I can feel my mind going... I am the HAL 9000. My programmer was D r. C h a
Possibilities for anti-porn measures (Score:1)
One Scenario : (Score:1)
Poker (Score:2)
Re:Poker (Score:1)
Article w/o annoying ads (Score:1)
Here's the article [latimes.com] without the annoying popup or the god-awful DHTML ad flying across the screen.
The best use for this... (Score:2)
I come home from work and "Bambi" recognizes that I'm real tense and stressed out and knows what will help me unwind. Oh yeah, I'm liking this... ;)
Uh oh (Score:2)
Maybe configure it to "pop up" the "needed sites"?
I would assume that it is clear... (Score:1)
I've learned so much from movies.
I'm looking forward to (Score:1)
.
Blood pressure and drug factor (Score:1)
Clippie v2 (Score:4, Funny)
You>>grr
Clippie>> You are frustrated, would you like my help?
You>>arrrg
Clippie>> I sense you need help; I have migrated your document into the letter template I think you want to use.
You>> stop!
Clippie>> Oh, you are done with your letter? Since you are having trouble, I have taken the liberty of saving and printing your letter for you.
You>> &*^@*(&#$_#(%*&
Clippie>> I sense how difficult this is for you; relax as I help you through the end of the letter-writing process. Place an envelope in the printer to print the envelope to send your letter, that's all you have to do, see how easy this is?
...
I can't wait...
-Pete
Re:Clippie v2 (Score:1)
i've sensed the need for a new license... (Score:1)
Re:Clippie v2 (Score:3, Funny)
Clippy>> "It looks like your writing a.... Oh, erm. Never mind". (*Clippy looks terrified and runs off-screen.*)
You>> "THAT'S RIGHT CLIPPY! RUN! RUN FOR YOUR LIFE!"
Hey, that could be fun.
If you're interested (Score:4, Informative)
Thanks for Doing LA Times' Job For Them (Score:3, Insightful)
I'd suggest reading Affective Computing by Rosalind Picard (MIT Press); her homepage is here [mit.edu], there's an interview on First Monday [firstmonday.dk], and the group's homepage is at MIT [mit.edu].
Thanks for posting this. From the LA Times article, one gets the feeling that Movellan is leading a one-man renaissance in AI. Like most articles about far-future technologies, it is heavy on the "gee-whiz" and "what will they think of next?" stuff and light on any sort of in-depth examination of the issues involved.
First, I don't understand why the media (newspapers especially) don't take the time to do a thoughtful, in-depth story about non-time-critical issues like affective computing. Second, I wish that if they were going to do a half-assed job of it, they would at least cite other, more detailed sources of information so interested readers could learn more on their own. Yeah, I suppose someone can do a web search to find this out. And thank god for Slashdot, where the readers usually know more about the subject matter than the article authors. But it's common courtesy to cite important people in a scientific field. At least it is when writing a scientific paper -- why should the mass media be exempt from this little nicety?
Suppose you were a researcher at MIT's program in this field and saw this article. Wouldn't you be kind of pissed off? The LA Times could have replaced that paragraph about the Golem with a paragraph about the MIT program.
I'm troubled by the slipshod coverage that science and technology gets in the mass media. Do the newspaper authors think we don't care to know the details? Do they themselves not care?
GMD
A good point, but... (Score:1)
On the other hand, most sysadmins and other geeks (including a large share of tech writers) can't write well enough to translate these issues for everyday readers -- which is more a matter of explaining in small chunks than simply writing at a sixth-grade level and ignoring anything more complex (though few of my editors have agreed).
This isn't to say there aren't papers doing a good job--letting reporters develop stories, sending them to training that increases knowledge, etc.--but they are the exception, not the rule.
affective? (Score:5, Funny)
This is useless .. mostly anyway (Score:1)
However, I'm certain that it would end up mostly being used to see what kind of ads and spam we respond to.
Or perhaps it will sense your fear when you file your tax returns online! ;)
Sure it has uses -- on the road (Score:2, Interesting)
Re:This is useless .. mostly anyway (Score:1)
Dating games, anyone? (Score:4, Funny)
Now, take a dating sim like Sakura Taisen [sakurataisen.com]. Not only do you have to choose the right response to the question "Does this dress make me look fat?", but your facial response can have other effects.
For some games, this can be cool. Imagine an RPG where the look on your face determines your character's mood - and your response can then be read as humorous, sarcastic, serious, threatening - who knows. It will put real role playing on the computer in a new light, because you're doing more than reacting to the game; you're interacting with it.
Then again, the look on my face when I play FPSs like Quake is usually the same one I get when I'm sitting on the toilet, so that might not be a good thing....
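Something like this, maybe -- the expression labels and the tone mapping here are pure guesses on my part, not anything from the article:

# Toy sketch: whatever the camera thinks your face is doing picks the
# tone your RPG character uses. The labels and mapping are invented.
EXPRESSION_TO_TONE = {
    "grin": "humorous",
    "smirk": "sarcastic",
    "neutral": "serious",
    "scowl": "threatening",
}

def deliver_line(line, detected_expression):
    # Fall back to "serious" if the recognizer reports something unknown.
    tone = EXPRESSION_TO_TONE.get(detected_expression, "serious")
    return "[%s] %s" % (tone, line)

# Pretend the webcam-based recognizer just reported "smirk":
print(deliver_line("Of course I'll help you save the village.", "smirk"))
# -> [sarcastic] Of course I'll help you save the village.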
Re:Dating games, anyone? (Score:1)
Tho it does sound cool
Re:Dating games, anyone? (Score:2)
I just had visions of a dark future where geeks learn how to date from computers... and they get good at it!
What monstrosities will result from the union of geeks and cheerleaders?
Invasion of privacy... (Score:3, Funny)
Comp. B: How can you tell?
Just what we need, computers that gossip...
MIT Media Lab ... (Score:1)
Ironic (Score:2)
You want to get rid of these things? Stop linking to the sites that carry them.
New Virus Possibilities (Score:1)
Fully functional... (Score:3, Funny)
Tasha: You are fully functional, aren't you?
Data: Yes.
Tasha: How fully?
Data: I am programmed in multiple techniques of pleasure. (And can recognize your emotions, I'm the perfect man for you!)
Tasha: You jewel! That's exactly what I hoped.
Re:Fully functional... (Score:2)
It could still happen... There's always Star Trek 11!
It's been done (Score:2)
MIT Media Lab Affective Computing Home Page (Score:5, Informative)
http://affect.media.mit.edu/AC_affect.html [mit.edu], and description [mit.edu]
Sig: What Happened To The Censorware Project (censorware.org) [sethf.com]
Clippy: (Score:1)
[ finish typing ]
For the closing, AutoComplete suggests:
- Grudgingly
- Bitterly
- Hatefully
This would be too much fun
The dangers of technology (Score:1)
This will be interesting technology to watch. Privacy issues aside, this could be a giant leap for human communications.
My 'O' Face (Score:1)
Brundle-fly (Score:1)
Short-term results proved promising, but limitations in mid-80s quarantine technology resulted in several grotesque fly-like abominations and the ultimate termination of that line of research.
Pr0n (Score:1)
-- trb
what about... (Score:1, Funny)
An Exercise in Futility (Score:1)
Oh yeah (Score:1)
After all, why would I want it to process what I told it to when it could randomly offer me alcoholic beverages based on my facial expression?
What happens when... (Score:1)
Computer Emotion (Score:3, Informative)
Lola Cañamero's Emotion Page [herts.ac.uk]
Steve Allen's Home Page [bham.ac.uk]
At best these will have very poor reliability (Score:1, Insightful)
According to many clinical psychologists, when it comes to predicting what a patient will do next, (which of course is also based on feelings), the most reliable way to find out is just to ask them.
No big deal... (Score:2, Interesting)
First, let me state the obvious: there is a big difference between a computer recognizing emotions and a computer having emotions. The first problem is not hard to solve. It requires that we identify a set of features that can be used to recognize emotions ("phonemes of emotional expression" from the article) and feed those features to some sort of classifier. From a research standpoint, the interesting part is finding the features that identify emotions. Once we find those features, "discovering" that a computer can recognize them is not surprising.
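To make that concrete, here's a rough sketch of what I mean by features fed to a classifier. The feature names and numbers below are invented for illustration, and any off-the-shelf classifier would do in place of this toy nearest-neighbor one:

# Toy "features -> classifier" sketch. The features and numbers are
# invented; a real system would extract them from video, audio, and
# biosensor streams and use a proper classifier.
import math

# Hypothetical training data: (feature vector, emotion label).
# Features: [brow_furrow, mouth_curvature, vocal_pitch_variance, blood_pressure_delta]
TRAINING = [
    ([0.9, -0.6, 0.7, 0.8], "angry"),
    ([0.1,  0.8, 0.4, 0.1], "happy"),
    ([0.2, -0.7, 0.1, 0.0], "sad"),
    ([0.7,  0.0, 0.9, 0.6], "afraid"),
]

def distance(a, b):
    # Euclidean distance between two feature vectors.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def classify(features):
    # Return the label of the nearest training example (1-nearest-neighbor).
    return min(TRAINING, key=lambda example: distance(features, example[0]))[1]

# A slightly furrowed brow, downturned mouth, flat voice, normal blood pressure:
print(classify([0.3, -0.5, 0.2, 0.1]))   # -> "sad"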
Second, there are some interesting problems in AI. Really! Knowledge representation, vision, and language design are particularly interesting. But I get very, very angry at people who hype AI way beyond what it can do and/or do superficial projects like Kismet (Rod Brooks is a good salesman, but he is not a scientist).
Re:No big deal... (Score:1)
Re:No big deal... (Score:1)
The only way we can hope to get close is the shotgun approach: fund tens of thousands of researchers, some of whom take advantage of this to build cool toys to play with, or are total morons and waste the funds. But the important thing is that they try thousands of different approaches, and maybe the human race gets lucky with ONE guy who figures it out. And voila -- we get computers which can reason, and all that wasted funding has been worth it.
Obviously we may never get a guy who figures it out, in which case the money is wasted, but it's like a lottery ticket where you only have to get lucky once. And when you do, it's worth it.
Hyping AI or selling systems like Kismet is one way to get additional funding for it. That alone, and the payoffs if we get lucky, make it worth it.
Just a year late.... (Score:3, Funny)
{insert daisy.mid here}
hmm (Score:2, Funny)
Horrible article (Score:1)
A computer that can recognize emotions, by any other name, is still a computer.
Oh Geez, It's 1969 all over again (Score:2)
"Turning On" will have a whole new meaning.
The MP3's will be great, if they can manage to foster the hippy utopia of a world with no RIAA.
Ah yes... but is Silicon Valley ready to be the next Haight-Ashbury? Will Bill Gates disown his computer for letting its cables get too long? Will Steve Jobs quit existentialism and realize "you don't need to be a weatherman to know which way the wind blows?"
And in the end, the love you take is equal to the love you make.
Re:Oh Geez, It's 1969 all over again (Score:1)
Re:Oh Geez, It's 1969 all over again (Score:2)
Man, I feel like Ozzy Osbourne looks.
Re:Oh Geez, It's 1969 all over again (Score:2)
That would rock.
or at least be insanely scary.
Ooooh SHIIIIT!!!!!!!
The world CANNOT HANDLE that much Gainax-esque anime!
Hmm, how about Fractal generators? The new Lava Lamps?
Solution to annoying ads? (Score:1)
It's over now... (Score:1)
Practical Uses (Score:1)
Emotion is thought. (Score:3, Insightful)
The idea that consciousness could occur outside the context of emotion is a pernicious misconception. It arose from the combination of a greatly oversimplified view of thought and the legacy of dualism.
It is true that what we experience as "emotion" is a subtly different kind of cognition than what we experience as deliberate thought. It's more fundamental, and more closely tied to other physical systems. So I do agree that it makes a certain sort of sense to distinguish "thought" and "emotion". Ultimately, however, both are manifestations of the same fundamental brain activity. They are deeply related and are not in opposition.
We've been spectacularly bad at analysis of our own consciousness. History has shown that much of what we don't notice and so take completely for granted are fundamental and extremely difficult problems; while what we are very aware of and have concentrated upon have proven to be trivial. The predicate calculus, in this context, is trivial.
I've long railed against the cliche of the "unfeeling" thinking machine/being one sees in popular science fiction. Neither Spock nor Data would be able to carry on a meaningful conversation if their thought didn't exist within the context of emotion. The idea that a thinking machine could imitate human consciousness without including human emotion is absurd if examined carefully.
Be that as it may, "affective computing" is only a very minor addition to computing in the context of AI. It's just another form of data acquisition, albeit one that would no doubt be very useful for an AI. None of this stuff we hear about is even remotely close to actual AI; at best it's just "smarter" computing. Real AI will only be achieved when we are able to build (or more properly, "grow") very high-level complex adaptive systems aimed at complex human interaction.
Good website for them to research... (Score:2)
Nice to see (Score:3)
A former professor of mine was working with this (Score:1)
I had a professor named Piotr Gmytrasiewicz who once told us about research he was doing on this subject. It was always neat to be able to make statements like, "Emotions can be represented as an ordered quintuple."
Piotr's gone on to another university up north, according to my last web search.
realdolls (Score:1)
Then again, playing a p0rn0 in the background probably does the same thing....
Metrowerks debugger (Score:2)
When it senses that it has ticked you off long enough, it knows it has to go. And it just does.
At that point, my emotion peaks, and suddenly my pulse rate goes down as I wait for CodeWarrior to reopen my project.
Now, I can't say it's helping me much. But it sure is prudent enough, by quitting, to preserve my TiBook.
Warning: Sarcasm Ahead (Score:1)
A solution in search of a problem (Score:1)
They're looking for an application for the detection of emotion in users. Plenty of people have come up with the idea that a computer should be able to detect when the user is frustrated and refine the interface for the user.
This neglects the fact that the user should not get frustrated in the first place!
Well, just take your Botox injection.... (Score:1)
Never worry about privacy issues again - only about the fact that you injected botulinum toxin into your face.
Affective computing (Score:1)
There are some steps which are harder than others.
The relatively easy thing to do is build up a representative map of the different emotions and how they relate (sadness as the negative of happiness, for a simple example). It's then fairly easy to get your logical computer to reason about the emotions of someone (the user, say) and react accordingly in the way the program author sees fit; a rough sketch of what I mean appears below.
The hardest thing to do is accurately elicit the emotions of the user. If a website is trying to elicit your emotions, what can it do? It can look at the links you click on - well, that won't get accurate results (unless your links say something like 'click here if you genuinely feel angry at the moment') - and it can measure the length of time between clicks (i.e., users are anxious if they aren't really giving a specific page the time it needs to be read). This, like the links, is inaccurate because the reason they do something (clicking fast, clicking on specific links) could be completely orthogonal to their emotions. There would be some mileage in having extremely detailed metadata about the content viewed and examining the reactions, but the more subtle emotions would be nigh on impossible to accurately elicit and respond to properly.
The problem is the HCI. If you are manually eliciting emotions from people, that could affect the emotions themselves if it wasn't second nature for them to be open about them to some system. How do we reliably capture emotions, even with some spangly new devices? My guess is that the only way to do this is by analysis of pictures obtained from video feeds trained on someone's face. Though this too has problems - different people show emotions in different ways. Some can shut themselves off emotionally, or don't betray their emotions as much, rendering the situation problematic.
thenerd.
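Here is the sketch mentioned above: a minimal emotion map plus a reaction rule. The emotions, valence numbers, and reactions are all made up for illustration, not taken from any real system:

# Each emotion gets a valence on a -1..+1 axis, so "sadness is the
# negative of happiness" falls straight out of the numbers.
VALENCE = {
    "happiness": 1.0,
    "contentment": 0.5,
    "boredom": -0.2,
    "anger": -0.8,
    "sadness": -1.0,
}

def opposite(emotion):
    # Return the emotion whose valence is closest to the negation.
    target = -VALENCE[emotion]
    return min(VALENCE, key=lambda e: abs(VALENCE[e] - target))

def react(detected_emotion):
    # React "in the way the program author sees fit," keyed off valence.
    if VALENCE[detected_emotion] < -0.5:
        return "tone down the interface and offer help"
    if VALENCE[detected_emotion] < 0:
        return "suggest something new"
    return "stay out of the way"

print(opposite("sadness"))   # -> "happiness"
print(react("anger"))        # -> "tone down the interface and offer help"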
Contradiction? (Score:1)
Isn't thoughtfulness a distinctly human quality? The article seems to waffle on whether the ability to distinguish emotions implies the ability to have them. It may seem like a clear "no," but aren't we ourselves little more than a collection of relays and circuits, at least on the surface?
So many possible jokes... (Score:2, Funny)
A: I really don't have much experience to use as a basis, and
B: A large portion of the audience probably doesn't either.
You'll just have to make up your own joke this time, you won't be getting a Score:5, Funny from me...
what is this "love"? (Score:1)
Girls, and stuff (Score:1)
school.. (Score:1)
Sorry. You're depressed. You may be concealing a weapon with the intent to kill masses, so we won't let you in.
or angry, with the intent to kill one..
or who knows what people may think of.
-DrkShadow
Marvin (Score:2)
We call it "Voight-Kampff" for short (Score:1)
TWW
Oversensitive Computers (Score:1)
Faking it in AI (Score:2, Offtopic)
As with Eliza, systems that seem to have emotions generate responses from humans that cause the systems to be overestimated. Parry, which was developed in the early 1970s along the lines of Eliza and simulated a dialog with a paranoid, was probably the first program to have "emotional state". So this isn't new.
Even something as simple as the Furby has that effect. (I'm not criticizing the Furby; I've met the designer, and he's just trying to make a toy kids like. He doesn't make any unreasonable claims for the toy.) It's a great way to get press coverage, because it yields good demos.
Dolls that fake emotions have been around for a while. The classic is Baby Think It Over [btio.com], the attention-demanding doll from hell used to convince teenagers not to get pregnant. Hasbro marketed, as My Real Baby [allianceforchildhood.net], a lower-cost (and less obnoxious) version designed by some of Rod Brooks' people from MIT.
And, of course, there are the Sims.
It doesn't take much internal state to fake emotions. It's typically just a few scalar values going up and down in response to inputs.
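For what it's worth, here's roughly how little state that takes. The drive names, constants, and thresholds are invented, not anyone's actual toy internals:

# A few scalars nudged up and down by inputs, then mapped to a displayed mood.
class FakeEmotions:
    def __init__(self):
        self.happiness = 0.5   # 0.0 .. 1.0
        self.arousal = 0.2     # 0.0 .. 1.0

    def _clamp(self, value):
        return max(0.0, min(1.0, value))

    def stimulus(self, petted=False, shaken=False, ignored_seconds=0):
        if petted:
            self.happiness += 0.2
            self.arousal += 0.1
        if shaken:
            self.happiness -= 0.3
            self.arousal += 0.4
        # Neglect slowly drags both drives down.
        self.happiness = self._clamp(self.happiness - 0.01 * ignored_seconds)
        self.arousal = self._clamp(self.arousal - 0.005 * ignored_seconds)

    def mood(self):
        if self.arousal > 0.7:
            return "upset" if self.happiness < 0.4 else "excited"
        return "content" if self.happiness > 0.5 else "sulking"

toy = FakeEmotions()
toy.stimulus(shaken=True)
print(toy.mood())   # -> "sulking"
toy.stimulus(petted=True)
toy.stimulus(petted=True)
print(toy.mood())   # -> "excited"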
Can it play poker? (Score:1)
"Anger leads to hate..." (Score:2)
Smart Ass Computer (Score:1)
bug-compatible (Score:1)
See, this is what happens when you submit a reference implementation instead of a specification. Now, to be fully compatible, any emotion perceiver has to have the same bugs as the original human version. And even if the human model gets fixed by its vendor, there's still an enormous installed base that we're going to be supporting for years.
Potential for Pornography (Score:1)
Why blood pressure? (Score:1)
Whichever method is used, if it's cheap and accurate (and it must be, to detect emotions), then maybe we should already have it for home medical use...
Good for usability tests (Score:2)
GI-GO-WAA (Score:1)
real human emotions! (Score:1)
Share and Enjoy!
Louis Armstrong- What a wonderful world. (Score:1)
Which brings me to the question of why we are so willing to spend disposable income on artificial pets (Tamagotchi) with artificial intelligence at the level of a common ant, and now with artificial emotions. Living in artificial environments with artificial lighting and artificial plants, watching artificial lives (TV and movies) or artificially doing sports (ESPN). Decorating ourselves with artificial fur and artificial colors.
Good grief, time to turn this thing off and go snorkeling. Conch salad, sushi, and God's own sunset, now that's the ticket.
Automated Happiness Officer (Score:1)
"Joe, your computer is reporting that you've been a little down for the past several weeks. Studies have proven that productivity slides when you're sad, so... I'm afraid we're going to have to let you go."
Anyone ever play the old role-playing game "Paranoia"? Will computers be our automated Happiness Officers?
The simple solution! (Score:1)
:( Sad
:> Devious Bastard
:p +1 Funny
:* Not really sure, this is a news for nerds site.
Now take this with the Aibo story... (Score:1)
cool...
unless of course you fear your robotic Aibo Doberman pinscher, and it chews out your neck.
Affective Computing (Score:1)
http://thedreaming.org/~quartz/201/
ugh... (Score:1)
Door prize (Score:2)
Re:word usage (Score:1)
I suggest you spend some time away from the web and go read some real books. This isn't a very good place to pick up proper grammar and usage.