
AI Software Juggles Probabilities To Learn From Less Data (technologyreview.com)

moon_unit2 quotes a report from MIT Technology Review: You can, for instance, train a deep-learning algorithm to recognize a cat with a cat-fancier's level of expertise, but you'll need to feed it tens or even hundreds of thousands of images of felines, capturing a huge amount of variation in size, shape, texture, lighting, and orientation. It would be a lot more efficient if, a bit like a person, an algorithm could develop an idea of what makes a cat a cat from fewer examples. A Boston-based startup called Gamalon has developed technology that lets computers do this in some situations, and it is releasing two products on Tuesday based on the approach. Gamalon uses a technique that it calls Bayesian program synthesis to build algorithms capable of learning from fewer examples. Bayesian probability, named after the 18th-century mathematician Thomas Bayes, provides a mathematical framework for refining predictions about the world based on experience. Gamalon's system uses probabilistic programming -- or code that deals in probabilities rather than specific variables -- to build a predictive model that explains a particular data set. From just a few examples, a probabilistic program can determine, for instance, that it's highly probable that cats have ears, whiskers, and tails. As further examples are provided, the code behind the model is rewritten, and the probabilities tweaked. This provides an efficient way to learn the salient knowledge from the data.
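The "probabilities tweaked as further examples are provided" idea can be sketched with an ordinary Bayesian update. This is a toy illustration only, not Gamalon's actual Bayesian program synthesis; the whiskers feature and the numbers are invented:

```python
# Minimal sketch: Bayesian updating of the probability that "cats have
# whiskers" from a handful of labeled examples, using a Beta prior over
# the unknown feature frequency.

def beta_update(alpha, beta, observations):
    """Update Beta(alpha, beta) with a list of True/False observations."""
    for has_feature in observations:
        if has_feature:
            alpha += 1
        else:
            beta += 1
    return alpha, beta

# Start from an uninformative prior, Beta(1, 1): "no idea either way".
alpha, beta = beta_update(1, 1, [True, True, True, True, False])

# Posterior mean estimate of P(cat has whiskers)
estimate = alpha / (alpha + beta)
print(round(estimate, 2))  # 5/7 ≈ 0.71 after just five examples
```

Each new example nudges the estimate rather than retraining from scratch, which is what lets this style of model get traction from small data sets.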
  • by Baron_Yam ( 643147 ) on Tuesday February 14, 2017 @05:48PM (#53869145)

    In terms of animal models (that we're sadly still not sophisticated enough to understand), I find dogs' ability to identify other animals interesting.

    My dog can tell on sight whether another animal is a dog or not. This is remarkable because dog vision is actually slightly worse than human vision, he can do it from upwind (so it isn't scent), and there is a LOT of variation across dog breeds.

    Perhaps he's just seeing 'animal on a leash held by a human', but there does seem to be a slight pause of observation before he decides whether or not to bark, and a lot of owners in my area don't have any respect for leash laws.

  • I expect an AI to learn how to calculate probabilities precisely. Juggling the numbers is what you do with a checkbook register.
    • I don't want to hand you a set of balls and see what you do then. You seem to be using a slightly different definition of juggling than the article is, and the detail that interests me is that it doesn't occur to you to figure out how they are using it before making your pronouncement.
      • I don't want to hand you a set of balls and see what you do then.

        I keep mine in my pants. I wear boxers so they can hang nice and loose.

        You seem to be using a slightly different definition of juggling than the article is, and the detail that interests me is that it doesn't occur to you to figure out how they are using it before making your pronouncement.

        WOOOSH!

  • Bayesian theorem (Score:4, Informative)

    by superwiz ( 655733 ) on Tuesday February 14, 2017 @05:59PM (#53869221) Journal
    Bayesian probability is named for Bayes' theorem (which is in turn named after its namesake mathematician). But Bayesian inference is called that because it relies specifically on applying Bayes' theorem, rather than on any other part of Thomas Bayes' work.
    • Re: (Score:3, Funny)

      by Anonymous Coward

      I did not know that prior to reading your comment. I will need to go adjust my beliefs based on this new information.

      • Snide aside, Bayes' theorem is rudimentary and foundational to probabilistic inference. If someone wanted to learn more about it, looking up a biography of Thomas Bayes would tell them much less than looking up the (fairly trivial) theorem itself and seeing for themselves how it can be used for inference from probabilities.
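For anyone who does look it up: the theorem the thread is discussing is P(H | E) = P(E | H) · P(H) / P(E). A worked instance using the story's cat example, with all the numbers invented purely for illustration:

```python
# Suppose 10% of photos are cats, whiskers are visible in 90% of cat
# photos, and whisker-like features appear in 5% of non-cat photos.

p_cat = 0.10                      # prior P(cat)
p_whiskers_given_cat = 0.90       # likelihood P(whiskers | cat)
p_whiskers_given_not = 0.05       # P(whiskers | not cat)

# Total probability of observing whiskers (law of total probability)
p_whiskers = (p_whiskers_given_cat * p_cat
              + p_whiskers_given_not * (1 - p_cat))

# Bayes' theorem: posterior probability of cat given whiskers
p_cat_given_whiskers = p_whiskers_given_cat * p_cat / p_whiskers
print(round(p_cat_given_whiskers, 3))  # 0.667
```

One observed feature lifts a 10% prior to a two-thirds posterior, which is the "refining predictions based on experience" the summary describes.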
  • by turkeydance ( 1266624 ) on Tuesday February 14, 2017 @06:00PM (#53869233)
    "...code that deals in probabilities rather than specific variables..."
  • by Anonymous Coward

    Please can we STOP with the AI shit. Every. Fucking. Day.

  • by Anonymous Coward

    Jezzzz, don't you guys work with this shit? We have a row of AI systems...no expense spared to say the least. I will fully admit they are working on great shit...cure for cancers...but these machines are soooo dumb. AI is artificial....at best. I put AI into the category of "plastic plants"...fake at best.

  • by 50000BTU_barbecue ( 588132 ) on Tuesday February 14, 2017 @06:25PM (#53869393) Journal

    We need some of your 1990s fuzzy logic hype over here!

  • by akakaak ( 512725 ) on Tuesday February 14, 2017 @06:35PM (#53869461)

    The comparison of "deep learning that needs tons of examples" vs "Bayesian programming that can learn from a few examples" is a false dichotomy. It all depends on how much structure you assume a priori versus how much structure you learn from the data.

    Typical neural net (deep learning) examples start with no structure and thus require lots of examples. Typical Bayesian net examples start with a lot of structure and thus require only a few examples.

    On the other hand, if you start with a highly pretrained net like Inception-v3, then your deep learning cat expert may not need as many examples to generalize. And if your Bayesian programming model starts out with very general, very simple "building blocks" then it may need a lot of examples to extract the predictable structure.

    A main difference is that if you want to start with a lot of structure built in, you will probably have to pretrain for the neural net, whereas you can "hand code" the knowledge in your Bayes net. And the structure in the Bayes net may be a lot more transparent and easily interpretable than in the neural net. On the other hand, you had better hope you picked the right structure to begin with or else you will be reasoning over possibilities that are all very wrong! Knowing that an image is 50 times more likely to be a cat than a dog is not very helpful if it is actually a penguin.
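The structure-versus-data tradeoff described above shows up even in a toy model. Here "built-in structure" is reduced to the strength of a Beta prior on a single feature frequency, which is a stand-in assumption, not a real Bayesian program:

```python
# With a strong (informative) prior, five examples barely move the
# estimate; with a weak prior, everything rests on those five data points.

def posterior_mean(prior_a, prior_b, successes, trials):
    return (prior_a + successes) / (prior_a + prior_b + trials)

data = (4, 5)  # 4 of 5 observed cats had visible whiskers

# Informative prior: encodes the belief "about 90% of cats have whiskers".
strong = posterior_mean(90, 10, *data)
# Uninformative prior: Beta(1, 1), no built-in structure.
weak = posterior_mean(1, 1, *data)

print(round(strong, 3))  # (90+4)/(100+5) ≈ 0.895 -- barely moved
print(round(weak, 3))    # (1+4)/(2+5)   ≈ 0.714 -- data-dominated
```

The parent's warning applies directly: if that hand-coded prior is wrong, the strong model confidently reasons from a bad starting point and five examples cannot rescue it.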

    • I was learning to construct neural nets on my own. My first approach: initialize a net with random connections and strengths, run data through it, and mark synapses involved in correct outputs to be somewhat preserved. Then randomly modify a few synapses. If the new net gets more outputs correct on the data, keep it and repeat the randomization; otherwise revert to the old net and repeat the randomization from there.
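The mutate-and-keep-if-better scheme described above is random hill climbing. A self-contained sketch on a toy task, illustrative only and not the poster's actual code:

```python
# Random hill climbing on a single-neuron "net": mutate a few weights at
# random and keep the mutation only if accuracy does not get worse.
# (No backpropagation anywhere.)

import random

def predict(weights, inputs):
    # One neuron: weighted sum, thresholded at 0.
    return sum(w * x for w, x in zip(weights, inputs)) > 0

def accuracy(weights, data):
    return sum(predict(weights, x) == y for x, y in data) / len(data)

def mutate(weights, n=1, scale=0.5):
    new = list(weights)
    for _ in range(n):
        i = random.randrange(len(new))
        new[i] += random.uniform(-scale, scale)
    return new

random.seed(0)
# Toy task: learn logical OR of two inputs (third input is a bias fixed at 1).
data = [((0, 0, 1), False), ((0, 1, 1), True),
        ((1, 0, 1), True), ((1, 1, 1), True)]

weights = [random.uniform(-1, 1) for _ in range(3)]
for _ in range(200):
    candidate = mutate(weights)
    if accuracy(candidate, data) >= accuracy(weights, data):
        weights = candidate  # keep improvements (and sideways moves)

print(accuracy(weights, data))  # OR is easy; this usually reaches 1.0
```

Accepting ties (>=) lets the search drift across plateaus; the stricter revert-unless-better rule the poster describes also works but stalls more easily.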
  • "You can, for instance, train a deep-learning algorithm to recognize a cat with a cat-fancier's level of expertise"

    Bullshit. It sounds like they can train a system to recognise what is probably a cat-like animal, but a serious cat-fancier can give a reasoned and interesting description of the differences between two pedigree cats that to the layman look both perfect and identical.
    Background: my wife breeds international competition-grade Maine Coon cats...I used to be bored to death at shows un

  • I was looking into the deep learning celery diet earlier today.

    Uber Buys a Mysterious Startup to Make Itself an AI Company [wired.com]

    Many smart people in deep stealth.

    Vicarious (company) [wikipedia.org]

    The company received 40 million dollars in its Series B round of funding. The round was led by such notables as Mark Zuckerberg, Elon Musk, Peter Thiel, Vinod Khosla, and Ashton Kutcher.

    When has Peter Thiel ever been wrong?

  • Big deal. (Score:4, Funny)

    by Drunkulus ( 920976 ) on Tuesday February 14, 2017 @07:33PM (#53869795)
    I have a machine learning system which can make high level decisions based on zero data. It has named itself Deep Trump.
