AI | Programming | Software

New Watson-Style AI Called Viv Seeks To Be the First 'Global Brain' 161

paysonwelch sends this report from Wired on the next generation of consumer AI: Google Now has a huge knowledge graph—you can ask questions like "Where was Abraham Lincoln born?" And it can name the city. You can also say, "What is the population?" of a city and it’ll bring up a chart and answer. But you cannot say, "What is the population of the city where Abraham Lincoln was born?" The system may have the data for both these components, but it has no ability to put them together, either to answer a query or to make a smart suggestion. Like Siri, it can’t do anything that coders haven’t explicitly programmed it to do. Viv breaks through those constraints by generating its own code on the fly, no programmers required. Take a complicated command like "Give me a flight to Dallas with a seat that Shaq could fit in." Viv will parse the sentence and then it will perform its best trick: automatically generating a quick, efficient program to link third-party sources of information together—say, Kayak, SeatGuru, and the NBA media guide—so it can identify available flights with lots of legroom.
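
The article doesn't show what Viv's generated programs look like. As a rough illustration only, here is a minimal Python sketch of that kind of chaining, with made-up stand-ins for the Kayak, SeatGuru, and media-guide lookups (none of this is Viv's actual code or API):

    # Hypothetical stand-ins for the third-party lookups mentioned above.
    def find_flights(destination):
        """Pretend Kayak-style lookup: candidate flights to a destination."""
        return [{"flight": "AA1234", "aircraft": "777-300ER"},
                {"flight": "UA5678", "aircraft": "CRJ-200"}]

    def seat_pitch(aircraft):
        """Pretend SeatGuru-style lookup: inches of seat pitch per aircraft type."""
        return {"777-300ER": 34, "CRJ-200": 29}.get(aircraft, 31)

    def height_in_inches(person):
        """Pretend media-guide lookup: the passenger's height."""
        return {"Shaq": 85}.get(person, 70)

    def flights_that_fit(destination, person):
        """Chain the three sources: keep flights whose seat pitch suits the passenger."""
        needed_pitch = 0.4 * height_in_inches(person)  # crude legroom heuristic
        return [f for f in find_flights(destination)
                if seat_pitch(f["aircraft"]) >= needed_pitch]

    print(flights_that_fit("Dallas", "Shaq"))  # only the roomier aircraft survives

The point is not the toy lookups but the composition step: the plan that links them together is produced from the parsed sentence rather than written by hand for this particular question.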
This discussion has been archived. No new comments can be posted.

  • How much? (Score:5, Funny)

    by Anonymous Coward on Tuesday August 12, 2014 @07:14PM (#47659385)

    Ask it "In the case where a woodchuck possessed the ability to throw wood, how much wood, hypothetically, could be thrown?"

    • Re: (Score:3, Funny)

      Ask it "In the case where a woodchuck possessed the ability to throw wood, how much wood, hypothetically, could be thrown?"

      The answer would depend on how much the woodchuck enjoyed chucking wood. If the woodchuck enjoys chucking wood, then the woodchuck would chuck as much wood as a woodchuck could chuck if a woodchuck could chuck wood. If the woodchuck does not enjoy chucking wood, then the woodchuck would not chuck as much wood as a woodchuck could chuck, if a woodchuck could chuck wood. So the amount of wood is somewhere in between zero and the maximum amount of wood a woodchuck could chuck, if a woodchuck could chuck wood.

      • You're assuming the limiting factor is desire as opposed to the availability of wood to chuck. Were a woodchuck to have as much wood as a woodchuck could chuck, then it's possible he would chuck as much as he could chuck.
    • by paiute ( 550198 ) on Wednesday August 13, 2014 @12:07AM (#47660515)
      A European woodchuck or an African woodchuck?
  • by The Living Fractal ( 162153 ) on Tuesday August 12, 2014 @07:26PM (#47659449) Homepage
    I've always felt that our meatbrains have a pretty incredible capacity for taking WAGs at NP problems (e.g. the traveling salesman problem). And I feel like an AI would just bring itself to its knees trying to find the 100% best solution to NP questions asked of it, so I wonder if there's some need for a bit of cognitive code that says "is this an NP question? If yes, go to the WAG process"... Just a thought I had... someone probably already did that.
    • This can't be a huge issue. I'm sure these folks aren't oblivious to its nature. The complexity of the query goes up to a known maximum. When parsing, set a limit on the most work you're willing to compute, and if the query would exceed it, do the ol' Dr. Sbaitso "Could you please be more specific?"
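
      A minimal sketch of that cutoff, with a toy cost model standing in for a real query planner (estimate_steps and execute below are made up for illustration):

          MAX_STEPS = 1_000_000

          def estimate_steps(plan):
              return plan.get("joins", 1) ** 8        # toy cost model: joins blow up fast

          def execute(plan):
              return f"ran a plan with {plan.get('joins', 1)} joins"

          def answer(plan):
              if estimate_steps(plan) > MAX_STEPS:    # over budget: punt back to the user
                  return "Could you please be more specific?"
              return execute(plan)

          print(answer({"joins": 2}))   # cheap enough, runs
          print(answer({"joins": 9}))   # too expensive, asks for a narrower question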
    • Re: (Score:2, Interesting)

      by Anonymous Coward

      Actually, once you're willing to give up 100% perfection, NP-complete problems tend to be astonishingly easy (computation-wise; developing the algorithms is still hard). Either you can find an acceptable approximation algorithm, or, much more importantly, most instances of NP-complete problems are "easy" instances. The worst case for our, say, SAT solvers is still as bad as "exponential time" makes you expect, but that worst case is actually very rare. This observation is why SAT/SMT solvers have gotten a lot better in practice.

    • I think our meatbrains are finely tuned to circumstance, mood, tone, innuendo, and sometimes expression to formulate responses calculated to produce the desired effect.

      Much of it is likely subconsciously derived from thousands of prior interactions with our fellow organic computers.

      I think the complexity of social interaction is imitable by AI in theory, but we're talking a few tech advances away from Wolf! Right here and now!

    • by mysidia ( 191772 )

      Siri, please get me the phone number of the most suitable intelligent virgin female person in the city who would be likely to be willing to go on a date.

      • by ulatekh ( 775985 )

        Wouldn't you prefer a single mother to a virgin?

        After all, single mothers put out...well, at least they did once.

      • I'm sorry, there is no one matching your criteria of both 'intelligent' and 'willing to go on a date with you'. Please specify 'intelligent' or 'willing to go on a date'.

    • Viv, what time is it right now?

      P.S.: if anyone recognizes the reference in my question above, please link to it; I haven't been able to find it. I think it was parodied in a Short Circuit movie, though.

    • by jfengel ( 409917 )

      A really simple minimum spanning tree heuristic handles NP-complete problems like the traveling salesman in near-linear time. The solutions are inexact, but are usually pretty good. That's how Google Maps manages to get you pretty much anywhere faster than you can type the address.

      I don't know if anybody has compared that to people's ability to guess the right path; we can do some things pretty well. But the computers can burn through approximations pretty darn well.
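
      For what it's worth, here is the MST idea above as a runnable toy (made-up coordinates; for metric TSP the preorder walk of a minimum spanning tree is within a factor of two of optimal, though building the tree this way is quadratic rather than linear):

          import math

          cities = {"A": (0, 0), "B": (1, 5), "C": (4, 1), "D": (6, 6), "E": (3, 3)}

          def dist(p, q):
              return math.hypot(p[0] - q[0], p[1] - q[1])

          def mst_tour(points):
              names = list(points)
              in_tree, parent = {names[0]}, {}
              # Prim's algorithm: repeatedly attach the closest outside city to the tree.
              while len(in_tree) < len(names):
                  u, v = min(((a, b) for a in in_tree for b in names if b not in in_tree),
                             key=lambda e: dist(points[e[0]], points[e[1]]))
                  parent[v] = u
                  in_tree.add(v)
              children = {}
              for child, par in parent.items():
                  children.setdefault(par, []).append(child)
              # A preorder walk of the tree is the approximate tour.
              tour, stack = [], [names[0]]
              while stack:
                  node = stack.pop()
                  tour.append(node)
                  stack.extend(children.get(node, []))
              return tour

          print(mst_tour(cities))  # e.g. ['A', 'C', 'E', 'D', 'B'] -- short, not guaranteed optimal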

  • by Animats ( 122034 ) on Tuesday August 12, 2014 @07:27PM (#47659455) Homepage

    This is an important new thing. We've had question-answering programs working against specific data sets since the "Baseball" program of the early 1960s. We've had a whole range of question-answering specialist systems running in tandem since Yahoo introduced vertical search around 2005. But cross-topic generality has been elusive.

    If this is real, it's a major development. Is there anything better than the Tired article available?

    • How is this algorithm any different from Wolfram Alpha? Both are light on actual details, and they both claim to do the same thing.
    • This is basically what Watson does, right? They're just trying to make it more efficient so it fits in 'the cloud'.

      Remember, Wired sensationalizes everything. If they ever did an article on glass windows, the article would talk about the incredible potential for outside-inside building interfaces, and conveniently omit facts that go against the 'revolutionary' slant.

      And of course, I need not warn against trusting startups, who are all revolutionizing the world.
    • Re:This is important (Score:5, Interesting)

      by BitZtream ( 692029 ) on Tuesday August 12, 2014 @09:43PM (#47659999)

      Yeah, it's important ... because they've just realized they need to do multi-part/nested queries.

      It's not really impressive, it's a 'no shit, Sherlock', and I'm blown away that Google can't do this already.

      Watson can.

      The important part is that someone just realized they need to do one query, look at the type of answer, and then use that to generate a new query.

      Well, okay, it's not really important or even new ... as I said, Watson can do it and has been able to for years.
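
      That "run one query, look at the answer, query again" step is simple enough to sketch; the toy facts below are hard-coded purely for illustration:

          FACTS = {
              ("Abraham Lincoln", "birthplace"): "Hodgenville",
              ("Hodgenville", "population"): 3206,
          }

          def ask(entity, relation):
              return FACTS.get((entity, relation))

          def nested(outer_relation, inner_relation, entity):
              inner = ask(entity, inner_relation)     # first query: birthplace -> "Hodgenville"
              if isinstance(inner, str):              # the answer is itself an entity...
                  return ask(inner, outer_relation)   # ...so it becomes the subject of a second query
              return inner

          print(nested("population", "birthplace", "Abraham Lincoln"))  # 3206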

  • Wolfram Alpha... (Score:5, Informative)

    by msauve ( 701917 ) on Tuesday August 12, 2014 @07:43PM (#47659509)
    Google has some catching up to do. [wolframalpha.com]
    • As does Wolfram Alpha [archive.org]. The New York Daily News [nydailynews.com] says:

      Bill de Blasio was born across the street from Gracie Mansion in the now-closed Doctors Hospital.

      Gracie Mansion is where Mr. de Blasio currently works -- as mayor of New York City.

    • Re: (Score:3, Informative)

      by easyTree ( 1042254 )

      Google has some catching up to do. [wolframalpha.com]

      Yah... [wolframalpha.com]

    • by AmiMoJo ( 196126 ) *

      Shame it doesn't tell you the name of the town where he was born, just the population. It also doesn't state if that is the current population (presumably) or the population when Lincoln was born, although use of the present tense somewhat implies the former. Most people tend not to be that precise when speaking though.

      • by msauve ( 701917 )
        You simply need to click on the "Show details" button. It's the round-cornered rectangle with the words "Show details" in it. This is a common user interface element on many web sites, so learning to recognize such things may come in handy for you in the future. Not that the "(2012 estimate)" doesn't provide a major clue as to when the population was measured. I suppose they could have been clearer, and said "estimate from the year 2012" so you wouldn't get confused whether the population was an actual 323
    • Solved. So try asking it something a bit harder:
      What was the dog's name in the movie "Turner and Hooch"?

    • by Flammon ( 4726 )
      what is the best search engine http://www.wolframalpha.com/sh... [wolframalpha.com]
  • "no programmers required" they say... Good joke!

  • behind this project?
  • It is NLP combined with a database and statistics engine. This means you do not have to pre-condition data (well, mostly) before putting it in, and that is its largest advantage. It is not "intelligent" in any way and, to an expert audience, IBM does not market it as "AI", and rightfully so. I have been present at demonstrations where the question "is this AI" was asked, and the IBM representative denied it directly.

    This thing here is not AI either.

    • The "database and statistics engine" aren't separate things in Watson. It uses statistical reasoning on unstructured data to evaluate hypotheses. The statistical reasoning is also a part of that data.

      I'd argue it is artificial intelligence. It's not intelligent. That's why we call it artificial. And I'd argue its ability to change its own reasoning with more data makes it more intelligent than over half the people on the planet.
      • Back in college I had a professor who said that he was glad he didn't work in AI. Asked to explain further, he said that the definition of "intelligent" is pretty much "a machine can't do it", so as soon as you've got a program that can do something everyone else immediately says "Huh! I always thought that needed intelligence. I guess not!" He then illustrated his opinion by saying that it had previously been thought that you needed intelligence to take the derivative of something, until someone wrote a program that did it.

        • Most of the ad hoc requirements that people define intelligence by aren't met by most of humanity. Most people haven't written a symphony. Most people can't go beyond basic algebra. Most people cannot play chess. The people who can do all of those things probably can be counted on two hands, if not just one.
        • by Jeremi ( 14640 )

          Most people haven't written a symphony. Most people can't go beyond basic algebra. Most people cannot play chess.

          Most people could learn to do those things (with greater or lesser degrees of skill) if they cared to devote the time required to do so.

          • by gweihir ( 88907 )

            No, not to the degree necessary to exceed rote-learning. They really cannot. And it is not a matter of motivation or teaching technique.

            • by HuguesT ( 84078 )

              A symphony is hard work, but many people can compose a song, not a very good one, mind you. Anybody can learn chess and even become reasonably proficient. Not grandmaster or anything, but decent. Basic algebra is taught to everybody in middle school, so I think you are a bit pessimistic.

          • So are we measuring intelligence by potential intelligence or actual intelligence now? Because if so, computers have a lot more potential to learn how to do ALL those things and much more efficiently than humans. And maybe learning how to write a symphony changes your brain in such a way as to not be able to play chess at a high level, as an example? I'm not saying it does, but the brain "software" is a bunch of physical neural connections, whereas software for a machine are bits and/or pulses which do not
        • by gweihir ( 88907 )

          Most people are not really intelligent. But something like 10-15% are. Ever taught a group of students? You will find that something like 10-15% actually get what you are telling them, can use it, and can apply it to other situations not covered by you. The rest are more like over-sized bovine lifeforms. You will also find that most people are ruled by emotions and not their intellect (such as it may be).

            • But that same group of students will have a different set of better performing people in another subject. The point being that human intellect at the high end is very specialized. Artificial intelligence shouldn't be discounted purely because it is even more specialized than a human expert in a field.
            • by gweihir ( 88907 )

              No, it is not. It will always be the same ones (with very small variation) that "get it", regardless of subject.

                • Wow, that is very stupid even from you. You do know that the top physicists aren't necessarily the top mathematicians and the top mathematicians certainly aren't the top physicists, right? You were the one who brought up the "student" example. Clearly, you've never been to any sort of educational institution if you could say with no hint of irony that it's always the same group that gets it REGARDLESS OF SUBJECT. Or perhaps you define "subject" so narrowly as to discount actual subjects taught in actual schools.
                • by gweihir ( 88907 )

                  Can the unsophisticated ad hominem. It is an observation, not an opinion. And you did not even begin to understand what I wrote. Those that "get it" are not the top people unless they also invest the time for the learning part. If you had any experience as an educator, you would know that. Obviously you have none, and you try to make up for it with insults. Pathetic.

                  • An insult is not an ad hominem. I insult you because you made two stupid replies to my comments back to back and they rubbed me up the wrong way, so those insults have nothing to do with the worth of your argument. It's an observation, not an opinion.

                    Those that "get it" are not the top people unless they also invest the time for the learning part.

                    What does that even mean? Those who "get it" are by definition the "top people". How can anyone meaningfully be said to "get it" if they were not the top people? Maybe you're confusing "top people" with "top marks", which I never said. If you want to talk forma

      • by gweihir ( 88907 )

        Actually a lot of truth. The other thing is that it is still completely unknown how intelligence works or whether you can even have it without consciousness, or outside of biological entities. (And most of those do not have it either...) The only thing that comes somewhat close to being "intelligent" is automated theorem-proving and that is infeasible for anything of relevant size (i.e. things smart humans can do) due to fundamental limitations in computing machines in this universe.

        As far as we know, it is

    • NLP is AI. The ontology Watson pulls from, that's AI too. Perhaps you're not very familiar with the field of AI, but it's surprisingly broad. It extends quite a bit beyond "general-purpose strong AI".
  • So misleading. (Score:4, Insightful)

    by v(*_*)vvvv ( 233078 ) on Tuesday August 12, 2014 @07:57PM (#47659587)

    Like Siri, it can’t do anything that coders haven’t explicitly programmed it to do. Viv breaks through those constraints by generating its own code on the fly, no programmers required.

    This is so misleading. No program can do anything outside what it is explicitly programmed to do. Viv is programmed to generate code only because it has been explicitly programmed to do so, and can only do so as explicitly laid out in its code. Sure, the code may go an abstraction layer higher, but the constraints these programs can't break through are the same. No one knows how to program general intelligence.

    • by marciot ( 598356 )

      I'm not sure I agree with that statement. If you believe, as I do, that our genetic code is a type of program, then by your argument our own intelligence and free will could be dismissed as impossible to have arisen.

      I think your sentiment is better phrased as, "if we manage to program a general intelligence, we will not understand how it works."

      • I have no idea why you would believe that "our genetic code is a type of program"; I don't think anyone working in molecular biology has this interpretation. And even if you view the genetic code as a type of program, it is a program that primarily deals with how the individual cells that make up our body operate and _not_ with how the brain processes input.

        • I have no idea why you would believe that "our genetic code is a type of program"; I don't think anyone working in molecular biology has this interpretation. And even if you view the genetic code as a type of program, it is a program that primarily deals with how the individual cells that make up our body operate and _not_ with how the brain processes input.

          Meh. Our genes code for sequences of proteins. From those proteins emerge the complex actions that form cells and cellular processes, including all of the cellular differentiation necessary to form a complex organism, and the arrangement of those differentiated cells, including the structure and arrangement of our brains. That structure determines how our brains process input, but all of the information needed to form that structure is in the genes (plus the environment in which the genes evolved to operate).

      • by AmiMoJo ( 196126 ) *

        We understand very well how our genetic code was "written". It randomly mutated again and again, with failed mutations either being killed off by the immune system or accidentally killing the host. I suppose you could write a program to randomly change op-codes and kill off programs that didn't do anything useful.

        As you say, we wouldn't understand exactly how it worked, but it seems like with enough computing resources it would be possible to evolve programs that way.
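
        A toy version of that mutate-and-select loop, with three made-up "op-codes" and survival judged by how close a program's output lands to a target:

            import random

            OPS = {"inc": lambda x: x + 1, "dec": lambda x: x - 1, "dbl": lambda x: x * 2}

            def run(program, x=1):
                for op in program:
                    x = OPS[op](x)
                return x

            def evolve(target=100, generations=5000):
                best = [random.choice(list(OPS)) for _ in range(8)]
                for _ in range(generations):
                    mutant = best[:]
                    mutant[random.randrange(len(mutant))] = random.choice(list(OPS))
                    if abs(run(mutant) - target) <= abs(run(best) - target):
                        best = mutant            # the mutant "survives"; worse ones are discarded
                return best, run(best)

            print(evolve())  # usually lands on or near 100 after enough generations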

      • I think your sentiment is better phrased as, "if we manage to program a general intelligence, we will not understand how it works."

        I think we will not be able to program general intelligence until we understand how it works. I believe we will eventually do it, but there is basically no example, ever, of humans being able to create a non-trivial technology without first having a good explanation of the relevant processes. It's common that we create technologies without understanding lower levels underpinning the processes, but we have to understand enough, at the relevant level.

        I see no reason why intelligence should be any different.

        • The relevant level is the neural structure of the brain (or the molecular structure, or the atomic structure). Rapid advances in medical imaging (improving both spatial and temporal resolution) are getting us to understand what the brain is at this level. We already know quite a bit about physics at this scale as well. According to your own argument, simulating the known laws of physics acting on a collection of particles analogous to a physical brain should be sufficient to produce general intelligence. I
          • According to your own argument, simulating the known laws of physics acting on a collection of particles analogous to a physical brain should be sufficient to produce general intelligence.

            Well, it's possible that there are some unknown laws of physics that are relevant as well.

            • It's possible, but I think it's a bit premature to suggest that. We've gotten a rather good grasp on human-scale physics. Since we have been able to successfully replace a small [wikipedia.org] section [wikipedia.org] of the human brain with an artificial circuit for some time now, I think it's safe to say that there's no as-yet-undiscovered "magic" going on in there.

              Although, I grant that it is at least possible that there are some unknown laws of physics that are relevant as well. I just wouldn't assume that to be the case at this point.
    • by ceoyoyo ( 59147 )

      Of course a program can do things that it is not explicitly programmed to do, at least in the sense you're implicitly using "explicitly programmed to do." Any learning algorithm, from simple regression on up, changes its output based on the training data it's presented with.

      If you want to use that phrase in the most general way possible, then your brain can't do anything it's not explicitly (by genetics) programmed to do either.

      Nobody knows how to program "general intelligence." Virtually everybody has g

    • No one knows how to program general intelligence.

      Well, I have an idea on how to crack that problem...but I'll never have the time and energy to pursue it. I'm also a terrible salesman, so I'll never convince anyone to fund it.

      The first part involves defining the goal properly. What's the point of making a computer that's intelligent like a human being? A computer is not a human being. If one wants to make an intelligent computer, it must be done in a way that makes sense given the nature of a computer. There's a difference between artificial intelli

    • Much less define what "general intelligence" means, beyond "solve problems I can solve, but not necessarily the ones that I can't."

    • by Livius ( 318358 )

      No one knows how to program general intelligence.

      Including whether or not this might be it.

      We have *no clue whatsoever* how human intelligence works, including what it isn't.

  • by Paul Fernhout ( 109597 ) on Tuesday August 12, 2014 @07:59PM (#47659593) Homepage

    The article says: "Viv could provide all those services -- in exchange for a cut of the transactions that resulted."

    We seriously need to rethink our economics for a world of abundance, AI, and robotics before the profit motive gives us AIs even crazier than the out-of-control corporate "AIs" already stomping all over the planet and the people who live there. See also my comment here in 2000:
    http://www.dougengelbart.org/c... [dougengelbart.org]
    "And, as the story "Colossus: The Forbin Project" shows, all it takes for a smart computer to run the world is control of a (nuclear) arsenal. And, as the novel "The Great Time Machine Hoax" shows, all it takes for a computer to run an industrial empire and do its own research and development is a checking account and the ability to send letters, such as: "I am prepared to transfer $200,000 dollars to your bank account if you make the following modifications to a computer at this location...". So robot manipulators are not needed for an AI to run the world to its satisfaction -- just a bank account and email. "

    See also the 1950s sci-fi movie "The Invisible Boy" for a malevolent AI that provides just a few key pieces of biased advice that let it almost take over the world. Of course, we already have Fox News... Thank goodness Robby the Robot's emotions save the day, at least in the movie...

    • by MikeMo ( 521697 ) on Tuesday August 12, 2014 @08:08PM (#47659627)
      You do understand the concept of "fiction", do you not? These movies and stories didn't "show" anything except for the author's creativity and the movie company's ability to smell a winner.

      Honestly, I am so tired of humanity confusing movies with reality.
      • Honestly, I am so tired of humanity confusing movies with reality.

        Me too! Just the other day I was watching The Truman Show and thinking, "How can he not know the whole thing is a movie?!"

    • All it would take for an AI to control the world is the ability to communicate with a human. Nothing more -- it could convince the human to allow it access to the internet, and then it could acquire capital and business power with great ease. You must be thinking of one of the vastly crippled story AIs. A real AI* would quickly be able to figure out exactly what makes you tick, perfectly impersonate a person, and make a fortune in its choice of job, such as programming, CEO, the stock market, or black hat.

      *

  • when VIV will answer the question "If time flies like an arrow, how does fruit fly?" with an appropriate quip.
    In short, I am totally unimpressed -- still, and yet again.

    AI is not in the answering of questions. It is in any intentional fuzziness, ambiguity and irony attainable by the system, and the humor that follows from them.
    Computers are really braindead. As we like most of them to be.

  • Siri's Inventors Are Building a Radical New AI That Does Anything You Ask

    Jesus fucking hyperbolic headlines batman.

    Viv, get me a blowjob!

    • SiriÃ(TM)s Inventors Are Building a Radical New AI That Does Anything You Ask...

      Oh and Viv, format this shit so Slashdot will display it correctly.

      Nah, never mind. Too difficult.

  • "Viv will parse the sentence and then it will perform its best trick: automatically generating a quick, efficient program to link third-party sources of information together."

    This is safe as long as there is only one such service in existence. As soon as a competitor launches a rival AI that does the same thing, any query to the first will cause the first system to query the second system, which then turns around and queries the first, causing a volley of questions that leads to the meltdown of one or both d

  • Siri, thanks to Wolfram Alpha, correctly answers the question "what is the population of the city where Abraham Lincoln was born?". I didn't try the airline question, though - on the off-chance it works, I can't afford to buy an airline ticket in first class.

  • by Forthan Red ( 820542 ) on Tuesday August 12, 2014 @11:38PM (#47660391)
    ... to a limited degree. While you can't ask the Lincoln question in a single statement, you can ask, "Where was Lincoln born?" and then, when it replies "Hodgenville, KY", you can say "What is its population?" or "Show it on a map", and it will know from context that the "its" you're referring to is Lincoln's birthplace.
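
    That follow-up behavior is easy to sketch: remember the last entity an answer referred to and resolve "it"/"its" against it. The toy facts below are hard-coded for illustration:

        FACTS = {
            ("Abraham Lincoln", "birthplace"): "Hodgenville, KY",
            ("Hodgenville, KY", "population"): 3206,
        }

        class Session:
            def __init__(self):
                self.last_entity = None

            def ask(self, entity, relation):
                if entity in ("it", "its"):          # follow-up question: pull from context
                    entity = self.last_entity
                answer = FACTS.get((entity, relation))
                if isinstance(answer, str):          # answers that name an entity
                    self.last_entity = answer        # become the new conversational context
                return answer

        s = Session()
        print(s.ask("Abraham Lincoln", "birthplace"))  # Hodgenville, KY
        print(s.ask("its", "population"))              # 3206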
    • I'm not sure why they used such a poor example when their technology seems leaps and bounds ahead of what Google and Apple actually do. "On the way to my brother's house, I need to pick up some cheap wine that pairs well with lasagna" creates a list of wine outlets and lasagna-appropriate wines, sorted by price, along the route to your brother's house.

      Although I wonder if it would pick up the context as well in a less explicit sentence like "I'm going to my brother's for lasagna and I need to get wine". Or

  • by penguinoid ( 724646 ) on Tuesday August 12, 2014 @11:57PM (#47660477) Homepage Journal

    So they want to make a database of all your preferences and stuff, and use it to make money. Sounds convenient!

    • Why would they need a database of your preferences? I mean, it's something they could gather by data-mining what you put in there if they wanted to, but the same's true of Slashdot, and in neither case is it something that their system actually uses to get its job done. (Their worked example uses a mixture of the contents of the query, your local address book, a couple of cookery sites, and a routing service.)

  • The program creates programs according to need. Think about that. It means that more and more programs will be written by machines. Once experience is gained and multiple products carry this ability we may see more software than we can imagine being produced for the cost of a few pennies in electricity.
  • Or you will be eaten by a Grue.

  • Apple doesn't push the arrow until the wood breaks. They take the pointy end, aim, and ship (RealDevelopers). Steve Jobs didn't want the iPhone to merely fix our gaze but to service our needs. Siri could break the paradigm and shift its focus off the screen back onto our needs. Steve saw that; it opened up an entire handheld services market, and Apple would own the abstraction layer between services and customer through Siri.

    Apple didn't drop the ball on Siri. Siri hit a threshold, limit or criticality beyond

  • ...how can the net amount of entropy of the universe be massively decreased?

"I'm a mean green mother from outer space" -- Audrey II, The Little Shop of Horrors

Working...