N.Y. Times Magazine Chats With ALICE Bot Creator
aridg writes: "This week's New York Times Magazine has an article about Richard Wallace, the programmer of the ALICE AI chatbot that won first place in several competitions for realistic human-like conversation. Wallace sounds like a pretty unusual and interesting fellow; the article quotes an NYU prof both praising ALICE and saying to Wallace: '... I actively dislike you. I think you are a paranoid psycho.' A good read. [Usual NY Times registration disclaimers apply.]"
Re:A.I. field is currently crippled, (Score:2, Informative)
Really? How do you know this? When is the last time you read an AI research paper in a journal? Would you care to enlighten us as to how serious AI research is too anthropomorphic?
Or were you just talking about the hype surrounding AI which is independent of serious research in AI?
Please, we in the AI community would love to know... Otherwise, you're just spreading the hogwash that has been giving AI a bad name for the past fifty years.
For example, look at recent advances in NLP due to the shift towards statistical (empirical, i.e. data-based rather than linguistics-based) methods. Anaphora resolution has been more-or-less a solved problem for a few years now. (Anaphora is the use of a linguistic unit, such as a pronoun, to refer back to another unit; anaphora resolution is figuring out what is being referred to. E.g., the referent of "she" can be determined with over 95% accuracy in corpora where humans do not find ambiguity.)
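To give a flavor of the cues such systems combine, here is a toy pronoun resolver using two of the classic features (gender/number agreement and recency). This is a hand-rolled sketch with invented data, not any published system; real statistical resolvers learn weights for many such features from annotated corpora.

```python
# Toy anaphora resolution: pick the most recent preceding mention
# whose gender/number features agree with the pronoun.
# Feature values and example data are invented for illustration.

PRONOUN_FEATURES = {
    "she": ("fem", "sg"),
    "he": ("masc", "sg"),
    "it": ("neut", "sg"),
    "they": (None, "pl"),  # gender unconstrained
}

def resolve_pronoun(pronoun, mentions):
    """mentions: list of (noun, gender, number) in order of appearance."""
    gender, number = PRONOUN_FEATURES[pronoun.lower()]
    # Walk backwards: recency is one of the strongest single cues.
    for noun, g, n in reversed(mentions):
        if n == number and (gender is None or g == gender):
            return noun
    return None

# "Mary gave the book to Susan before she left."
mentions = [("Mary", "fem", "sg"), ("book", "neut", "sg"), ("Susan", "fem", "sg")]
print(resolve_pronoun("she", mentions))  # -> Susan (most recent agreeing mention)
print(resolve_pronoun("it", mentions))   # -> book
```

A statistical system would score all candidates with learned feature weights instead of taking the first agreeing match, which is how it climbs past what hand-written rules like these can do.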
Many people do not realize how many small incremental advances are being made using machine-learning-based approaches, and assume that all we do is run around making airplanes modelled after birds.