
Loebner Talks AI

Mighty Squirrel writes "This is a fascinating interview with Hugh Loebner, the academic who has arguably done more to promote the development of artificial intelligence than anyone else. He founded the Loebner Prize in 1990 to promote the development of artificial intelligence by asking developers to create a machine which passes the Turing Test — meaning it responds in a way indistinguishable from a human. The latest running of the contest is this weekend and this article shows what an interesting and colourful character Loebner is."
  • Arguably? (Score:5, Interesting)

    by mangu ( 126918 ) on Saturday October 11, 2008 @06:06PM (#25341911)

    the academic who has arguably done more to promote the development of artificial intelligence than anyone else

    Well, I suppose someone could argue that, but it would be a pretty weak argument. I could cite at least a hundred researchers who are better known and have made more important contributions to the field of AI.

  • by Louis Savain ( 65843 ) on Saturday October 11, 2008 @06:51PM (#25342143) Homepage

    Intelligence, artificial or otherwise, is what psychologists define it to be. It has to do with things like classical and operant conditioning, learning, pattern recognition, anticipation, short- and long-term memory retention, associative memory, aversive and appetitive behavior, adaptation, etc. This is the reason that the Turing test and symbolic AI have nothing to do with intelligence: they are not concerned with any of those things.

    That being said, I doubt that anything interesting or useful can be learned from writing those Loebner/Turing programs. It would be much more interesting to write programs that learn to play a good game of Go. I suggest that Loebner change his competition accordingly.

  • by geomobile ( 1312099 ) on Saturday October 11, 2008 @06:53PM (#25342155) Homepage
    Isn't that just the point of the Turing test: if you can fool people into believing you're intelligent, then you are intelligent? There is no way to tell whether something is intelligent apart from its behavior. Language (string manipulation) was probably chosen as the lab condition because the amount of context and the variety of problems that can come up during the test are solvable only by humans (so far), and only using that which we call intelligence.

    I know this was true for playing chess at a decent level twenty years ago. OK, we'll probably never accept the intelligence of anything whose inner workings we understand completely.

    But that is not a problem with the test; it is a problem with our willingness to define intelligence. So, no proof of artificial intelligence is possible for us, ever. News at eleven.

    At least the dog argument doesn't hold. The Turing test does not claim to be able to define all types of intelligence. Also: future dog-driven string manipulation still possible.
  • by retchdog ( 1319261 ) on Saturday October 11, 2008 @07:26PM (#25342353) Journal

    It would be interesting, except that any reasonable person would conclude that either 1) you were lying, or 2) you (the machine) were following a ruleset that would break after just a few more posts.

    AI would be better off focusing on dogs. Actually, it's better off focusing on practical energy-minimization and heuristic search methods, which would be comparable in intelligence to, say, an amoeba. Going for human-level intelligence right now is like getting started in math by proving the Riemann hypothesis.
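    For readers unfamiliar with the "energy-minimization and heuristic search" methods mentioned above, here is a minimal sketch of the idea: repeatedly try small random changes to a candidate solution and keep only those that lower an "energy" (cost) function. The toy 1-D objective and the function names are illustrative assumptions, not anything from the article.

    ```python
    import random

    def hill_climb(f, x, step=0.1, iters=1000, seed=0):
        """Greedy energy minimization: propose a random neighbor of the
        current best point and accept it only if it lowers f."""
        rng = random.Random(seed)
        best, best_e = x, f(x)
        for _ in range(iters):
            cand = best + rng.uniform(-step, step)
            e = f(cand)
            if e < best_e:
                best, best_e = cand, e
        return best, best_e

    # Minimize a simple convex "energy": (x - 3)^2, starting from x = 0.
    x, e = hill_climb(lambda v: (v - 3.0) ** 2, x=0.0)
    ```

    Methods like simulated annealing refine this by occasionally accepting uphill moves to escape local minima; the greedy version above is the simplest member of the family.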

  • by grumbel ( 592662 ) <grumbel+slashdot@gmail.com> on Saturday October 11, 2008 @07:40PM (#25342413) Homepage

    Yes, it tests for one particular form of AI, but that form would be extremely useful to have if achieved.

    The problem I have with the test is that it isn't about creating an AI, but about creating something that behaves like a human. Most AIs, even highly intelligent ones, will never behave anything like a human, simply because they are something vastly different, built for very different tasks. Now, one could of course try to teach that AI to fool a human, but then it's simply a game of how well it can cheat, not something that tells you much about its intelligence.

    I prefer things like the DARPA Grand Challenge, where the goal isn't to create something that behaves like a human, but simply something that gets the given task done. That way you can slowly raise the bar instead of setting a goal you don't even know is worth chasing. The Turing test feels too much like a challenge to fly by strapping wings to your arms: you might manage it one day, but the aviation industry doesn't really care; jumbo jets fly fine without flapping their wings.

  • Re:Arguably? (Score:3, Interesting)

    by Workaphobia ( 931620 ) on Sunday October 12, 2008 @05:11AM (#25344387) Journal

    Amen. The whole point of the Turing test was to express a functionalist viewpoint of the world: that two black boxes with the same interface are morally and philosophically substitutable. And this whole media-fueled notion of the Turing test as a milestone on the road to machine supremacy just muddles the point.

  • by bencoder ( 1197139 ) on Sunday October 12, 2008 @01:00PM (#25346079)
    That's true, but it doesn't deny that the Turing test is capable of detecting a real AI over one that just "fakes" it. The main problem here is that when people look for "intelligence," they don't really know what they are looking for. The Turing test offers one solution to this problem (can it talk like a human?). It is certainly not the be-all and end-all of intelligence/thought tests, and I don't think anyone ever claimed it was.

    Unfortunately, no one can come up with a suitable, testable definition of intelligence. Turing's answer is to say that the problem is meaningless, because there can be no definition of such an abstract concept, so he devised the test as a way of showing, in a very behaviourist manner, that it doesn't matter. If it can act intelligently according to a human without bias (hence the separation between human and machine), then there is nothing that can be said to deny that the machine is intelligent. No doubt people will claim it's not, but those people will always be biased against machines.
  • Re:Great Book on AI (Score:3, Interesting)

    by moore.dustin ( 942289 ) on Sunday October 12, 2008 @02:44PM (#25346689) Homepage
    Like I said, "His work is about understanding how the brain really works so that you can make truly intelligent machines." Your notion is dismissible, since we cannot fathom what we have no clue about, that being your alien. The best we can do on Earth is to look at how the most intelligent species we know, humans, actually develops and grooms its intelligence.

    Intelligence defined is one thing, intelligence understood biologically is something else altogether.
