Programming | AI | Technology

Deep Learning Pioneer On the Next Generation of Hardware For Neural Networks 45

An anonymous reader writes: While many recognize Yann LeCun as the father of convolutional neural networks, the momentum of which has ignited artificial intelligence at companies like Google, Facebook and beyond, LeCun has not been strictly rooted in algorithms. Like others who have developed completely new approaches to computing, he has an extensive background in hardware, specifically chip design, and that attention to hardware specialization, to moving data around complex problems, and ultimately to core performance has proven handy. He talks in depth this week about why FPGAs are coming onto the scene as companies like Google and Facebook seek a move away from "proprietary hardware" and look to "programmable devices" to do things like, oh, say, pick out a single face of one's choosing from a population of 800,000 in under five seconds.
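
A workload like that face lookup is, at its core, a nearest-neighbour search over face embeddings. A minimal sketch in Python, assuming the embeddings have already been produced by some network (sizes and names are illustrative, not any company's actual pipeline):

    import numpy as np

    rng = np.random.default_rng(0)
    N, D = 800_000, 128                      # gallery size, embedding dimension (~400 MB as float32)
    gallery = rng.standard_normal((N, D)).astype(np.float32)    # stand-in for real face embeddings
    gallery /= np.linalg.norm(gallery, axis=1, keepdims=True)   # unit-normalise each row

    def identify(probe_embedding):
        """Return the index of the gallery face most similar to the probe."""
        probe = probe_embedding / np.linalg.norm(probe_embedding)
        scores = gallery @ probe             # cosine similarity via one matrix-vector product
        return int(np.argmax(scores))

    # a slightly noisy copy of entry 123456 should still match itself
    probe = gallery[123_456] + 0.05 * rng.standard_normal(D).astype(np.float32)
    print(identify(probe))

The brute-force scan above is already quick on a CPU; the harder part, and the part the interview concerns, is computing embeddings for new faces fast enough, which is where specialized and programmable hardware comes in.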
  • by Viol8 ( 599362 ) on Thursday August 27, 2015 @04:40AM (#50400809) Homepage

    The problem is that *in theory* you could understand why they come to a particular result, but in practice it could be very hard with a large network for anyone to get their head around the processes leading up to the output. This means that unless safety rules are changed we won't be seeing these things driving cars or flying aircraft anytime soon, since the software needs to be verifiable and neural networks are not.

    • by Sneeka2 ( 782894 )

      Or, arguably, we need to change our definition of "verifiable"... For complex activities such as driving cars, we're reaching the limits of traditionally programmed computers. A human programmer cannot possibly think of every possible situation a car might encounter on the street and pre-program an appropriate response into the car. Neural networks and "artificial intelligence" don't have a pre-programmed response, but can come up with one based on patterns they know. So it becomes more about giving the machine a robust basis to work on.

      • by Viol8 ( 599362 )

        But then one day the neural net has a "senior moment" and drives the car off a cliff. And no one can figure out why. At least with a program you'll eventually figure out where the failure is. But I take your point about pre-programmed responses and you're right. I'm not really sure what the solution is - maybe use a neural network but have a normal program acting as a watchdog?

        • by orasio ( 188021 )

          You make a very interesting point.
          With automation, it's a lot easier for us to accept a given amount of understandable failure, than a much smaller amount of inexplicable failure. That might be a roadblock against some forms of automation.

          In any case, there's also economics, which does like statistics, and will make you choose the strategy that fails less overall. For example, insurance companies might favour driving algorithms that crash less often vs ones that crash a bit more often, but for better known c

        • It's not impossible. More complex, but not impossible. You take the input that caused it (from the car's black box), see which parent and child neurons fired indicating the undesired action, and then check those neurons against the training set to see what aspects of the training set conditioned that behavior (i.e., made those neurons respond to that image). Now you know what caused it. If you want, you can then modify the network. Or you can take the inputs that caused the incorrect response, make some
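
          A hedged sketch of that kind of attribution, using a toy numpy net (a real system would be vastly larger, and the weights here are random stand-ins for trained ones):

            import numpy as np

            rng = np.random.default_rng(1)
            W1 = rng.standard_normal((16, 8))        # hidden-layer weights (trained in practice, random here)

            def hidden(x):
                return np.maximum(0.0, W1 @ x)       # ReLU hidden activations

            def attribute(bad_input, training_set, top_units=3, top_examples=5):
                acts = hidden(bad_input)
                units = np.argsort(acts)[-top_units:]             # the neurons that fired hardest
                scores = [hidden(x)[units].sum() for x in training_set]
                return np.argsort(scores)[-top_examples:]         # training items that most condition those neurons

            training_set = [rng.standard_normal(8) for _ in range(200)]
            print(attribute(rng.standard_normal(8), training_set))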
        • by epine ( 68316 )

          But then one day the neural net has a "senior moment" and drives the car off a cliff.

          It's actually your geek pride that just plunged to astounding depths.

          Computers don't beat humans at chess by playing human chess better than humans. They beat humans by having a deeper view of the combinations and permutations and by making very few mistakes.

          A momentary "senior moment" in a self-driving car (I wish I could have rendered that in priapismic scare quotes, but Slashdot defeats me) would just as likely be foll

        • According to this article [nautil.us] neural networks do make mistakes, sometimes very big mistakes, like the ANN that confused a school bus with a football jersey. The root cause is not easy to determine, and hence the fix is not easy. 'Twould appear that the reasons neural networks fail are a hot topic in neural network research.
      • by Anonymous Coward

        A human programmer cannot possibly think of every possible situation a car might encounter on the street and pre-program an appropriate response into the car. ...
        it becomes more about giving the machine a robust basis to work on

        A serious question: is ensuring that a training data set is "robust" (i.e. all possible relevant scenarios are somehow in it) that much easier for humans to do than "thinking of every possible situation"?

        Perhaps it is, but that is not obvious to me. It seems like the two tasks are both very difficult.

        • by Sneeka2 ( 782894 ) on Thursday August 27, 2015 @07:39AM (#50401291)

          Sure, it's all extremely difficult. I'd think with neural networks you can use an evolutionary approach and eventually choose the program which has evolved and performed best over a series of X million tests. The question "when is the program done" doesn't mean "when has the programmer thought of every last possibility" anymore, but rather "when are we satisfied enough with the statistics to trust this program?"
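
          One way to make "satisfied enough with the statistics" concrete: run the candidate through a huge batch of simulated trials and accept it only if the observed failure rate, plus a confidence margin, stays under a target. The simulator and numbers below are placeholders, not a real certification criterion:

            import math, random

            def failure_rate_upper_bound(failures, trials, z=3.0):
                p = failures / trials
                return p + z * math.sqrt(p * (1 - p) / trials + 1e-12)   # crude normal-approximation bound

            def acceptable(simulate, trials=1_000_000, target=1e-4):
                failures = sum(simulate() for _ in range(trials))        # simulate() returns True on failure
                return failure_rate_upper_bound(failures, trials) < target

            # toy stand-in for a driving simulator with roughly a 1-in-20,000 failure rate
            print(acceptable(lambda: random.random() < 5e-5))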

      • by MrL0G1C ( 867445 )

        A human programmer cannot possibly think of every possible situation a car might encounter on the street and pre-program an appropriate response into the car.

        And they don't have to; all they have to do is make sure cars are substantially better than humans at not driving into things. What to drive into and what not to drive into in the event of an unavoidable accident will be determined by a simple scoring system that evaluates each possible route and picks the one with the best score. The scoring system
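
        A minimal sketch of such a scoring system, with made-up hazard categories and weights (the real policy question is, of course, what those weights should be):

          WEIGHTS = {"pedestrian": 1000.0, "vehicle": 100.0, "property": 10.0, "curb": 1.0}

          def route_cost(route):
              """route: list of (hazard, probability_of_impact) pairs along that path."""
              return sum(WEIGHTS[hazard] * p for hazard, p in route)

          def pick_route(candidates):
              return min(candidates, key=route_cost)          # lowest expected harm wins

          candidates = [
              [("vehicle", 0.9)],                             # brake in lane, likely hit the car ahead
              [("curb", 0.8), ("property", 0.3)],             # swerve right onto the verge
              [("pedestrian", 0.05), ("vehicle", 0.2)],       # swerve left across the oncoming lane
          ]
          print(pick_route(candidates))                       # picks the swerve onto the verge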

      • A human programmer cannot possibly think of every possible situation a car might encounter on the street and pre-program an appropriate response.

        I watched a talk by a Google self-driving car engineer; the funniest moment (and an example of your point about pre-programming) was the video showing the time when a Google car came across a woman driving a motorized wheelchair around chasing ducks in the middle of the street.

        • A human programmer cannot possibly think of every possible situation a car might encounter on the street and pre-program an appropriate response.

          I watched a talk by a Google self-driving car engineer; the funniest moment (and an example of your point about pre-programming) was the video showing the time when a Google car came across a woman driving a motorized wheelchair around chasing ducks in the middle of the street.

          Where I live I have seen motorized wheelchairs in the middle of the road, and also ducks crossing the road, although not this precise scenario. The point is that while it might seem unusual to see this on a Californian desert freeway, it isn't really that difficult to enumerate most possible hazards.

          If your automatic car crashes because of trans-dimensional anti-matter vampire bats or something, I hardly think anyone's going to worry about the programming missing out on that possibility.

    • by ziggystarsky ( 3586525 ) on Thursday August 27, 2015 @05:07AM (#50400861)

      Fortunately we can understand the processes within real people that lead to their actions. This is the reason that we safely let them drive cars, trains or fly planes.

      • by Viol8 ( 599362 )

        "Fortunately we can understand the processes within real people that lead to their actions. "

        Since when? Psychiatrists have been claiming that for years but I see little evidence for it beyond simple actions. Sometimes even the person themselves doesn't understand why they did something if it was subconscious.

        • by Anonymous Coward

          Whoosh!

        • "Fortunately we can understand the processes within real people that lead to their actions. "

          Since when? Psychiatrists have been claiming that for years but I see little evidence for it beyond simple actions. Sometimes even the person themselves doesn't understand why they did something if it was subconscious.

          But in this context, it's usually something along the lines of "I was texting on my phone while eating a burrito and slapping my kid's face in the seat behind me, which is why I failed to see the red light and hit the schoolbus without even braking".

          It's not really a question of subtle psychological explanations.

    • ... since the software needs to be verifiable ...

      The software does NOT have to be "verifiable". It just has to be thoroughly tested, and in practice, shown to be better than humans. It doesn't have to be perfect, it just has to be an improvement.

      Only trivial programs can be mathematically verified. Even for mission critical programs that make life and death decisions, very few can be proven correct. And even then, are you sure you trust the proof?

      There are techniques for making ANNs more reliable. One technique is "boosting": Independently train two
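
      A sketch of the independent-training idea the comment is reaching for (strictly a voting ensemble rather than boosting proper, which trains its members sequentially), with toy perceptrons in numpy:

        import numpy as np

        rng = np.random.default_rng(2)

        def train_perceptron(X, y, epochs=20, seed=0):
            r = np.random.default_rng(seed)
            w = r.standard_normal(X.shape[1])
            for _ in range(epochs):
                for xi, yi in zip(X, y):
                    pred = 1 if xi @ w > 0 else 0
                    w += 0.1 * (yi - pred) * xi               # classic perceptron update
            return w

        X = rng.standard_normal((200, 4))
        y = (X[:, 0] + X[:, 1] > 0).astype(int)               # a linearly separable toy task
        members = [train_perceptron(X, y, seed=s) for s in range(3)]   # trained independently

        def predict(x):
            votes = [1 if x @ w > 0 else 0 for w in members]
            return int(sum(votes) >= 2)                       # majority vote; disagreement can be flagged

        print(predict(np.array([1.0, 1.0, 0.0, 0.0])))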

    • I certainly wouldn't want to be the one leading the charge to get this approved; but we currently let neural networks drive cars after a relatively pitiful 'black box' verification where we subject them to maybe 30 minutes, 45 at most, of approximately real-world stimuli and then evaluate their responses.

      This arrangement does end up with ~30,000 fatalities a year; but seems to enjoy broad support.
      • by Viol8 ( 599362 )

        And your alternative would be what? Not have allowed anyone to drive in the last 100 years?

        • Well, I think that the standards for driving tests could use some modification; but I was actually aiming at exactly the opposite point: There isn't any particular reason to believe that we need to, or will, demand that machines that control vehicles be submitted to some sort of profound understanding and formal verification, given that we accept black-box testing (and pretty shoddy testing at that) for human operators.

          The initial lobbying might be a fairly ghastly pain; but I see no reason why there wou
          • by orasio ( 188021 )

            That's just not true.
            Humans, especially urban dwellers, are known to have a certain set of capabilities, in general.
            Also, they are known to behave in a certain fashion, and to abide by certain rules.
            For example, a human with a tendency to kill everyone in his path would not even be able to apply for a driver's license; he would be in jail, dead, or something similar.
            That black box testing is only verifying very specific knowledge and ability. It doesn't do a great job at that, but its task is a lot easier than

            • So make it pass a drivers test? Maybe a couple dozen? Send it cross country a few times to prove it can handle an enormous variety of situations? Sounds like what they are already aiming to do.
              • by orasio ( 188021 )

                No.
                That's what the GP proposed.

                For a human, a skill test is OK, because we already know he's a human, and cities are built around humans. We can expect him to behave in a certain way, and we kind of know his possible range of abilities and limitations, even if not in a formal way.

                There's a reason why we require other things, like a minimum age: being a responsible adult is a precondition for the test.

                What they are doing right now is different. Still a black box test, but much more comprehensive. They a

            • I don't mean to suggest that "anything goes" will become the motto of software testing (at least not more than it is today); but unless all the neural networks deliver extraordinarily erratic behavior despite all effort to the contrary, I agree that it will be a difference of degree (we will test them a lot more rigorously than humans); but not of kind (humans also experience 'edge case' behavior, whether it be an aneurysm hitting them at the hardware level, a psychotic break, a murder-suicide, some ill-concei
      • Well, how hard can it be? [youtube.com]

    • by bigpat ( 158134 )

      The problem is that *in theory* you could understand why they come to a particular result, but in practice it could be very hard with a large network for anyone to get their head around the processes leading up to the output. This means that unless safety rules are changed we won't be seeing these things driving cars or flying aircraft anytime soon, since the software needs to be verifiable and neural networks are not.

      I would agree that neural networks shouldn't be in a learning mode while in operation; they should be in a fixed operational mode. Once they are trained and the network is no longer being modified to fit the training set, a neural net is like any other algorithm and will output predictable results.
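
      A tiny sketch of that point: with the weights frozen, the network is just an ordinary deterministic function, so the same input always maps to the same output (weights below are random stand-ins for trained ones):

        import numpy as np

        rng = np.random.default_rng(3)
        W1, W2 = rng.standard_normal((32, 10)), rng.standard_normal((2, 32))   # frozen after training

        def forward(x):
            return W2 @ np.maximum(0.0, W1 @ x)               # fixed ReLU net, no learning updates

        x = rng.standard_normal(10)
        print(np.array_equal(forward(x), forward(x)))          # True: repeatable, hence testable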

    • You wouldn't typically use a single neural network from input (LIDAR, video, gyros, accelerometers, etc.) to output (steering and pedals).

      More typically you'd use different neural networks to tackle steps in the chain. One might identify the borders of the road and the median. Another might pick out cars. Another might project the position of the cars in the future. Each would be easier to test individually. You might also have a "supervisor" that looked for disagreement or inconsistencies between the sy
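
      A rough sketch of that modular arrangement, with stub functions standing in for the individual (hypothetical) networks and a supervisor that cross-checks their outputs:

        def detect_road_edges(frame):
            return {"left": 0.0, "right": 3.5}                 # lateral road bounds in metres

        def detect_cars(frame):
            return [{"id": 1, "x": 1.2, "v": -0.5}]            # lateral position and velocity

        def predict_positions(cars, dt=1.0):
            return [{"id": c["id"], "x": c["x"] + c["v"] * dt} for c in cars]

        def supervisor(edges, cars, predicted):
            """Flag disagreements between stages; any issue escalates to a safe fallback."""
            issues = []
            for c in cars + predicted:
                if not (edges["left"] <= c["x"] <= edges["right"]):
                    issues.append("car %d outside detected road bounds" % c["id"])
            return issues

        frame = object()                                       # placeholder for real sensor input
        edges, cars = detect_road_edges(frame), detect_cars(frame)
        print(supervisor(edges, cars, predict_positions(cars)) or "consistent")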

  • We've been through all this before ... spending 100s of millions just to re-realize that perceptrons are a dead end is a poor use of resources.

    • by lorinc ( 2470890 )

      I don't know. There is a part of me that says "yeah, this is a bubble that's going to burst soon", and another part that says "wait, you've never seen that much improvement on such complex tasks before". Probably the future is in between, and parts of the deep conv nets are here to stay, while other parts will rapidly be forgotten. But frankly, I don't know, which is a bit scary.

    • Why do you think perceptrons are a dead end and poor use of resources? A dead-end for what goal?
    • I've written some pretty crazy decision tree algorithms and some deep learning neural networks. There are use cases for neural networks, including perceptrons, that no traditional algorithm can handle.
  • Well, FPGAs being the choice for NN implementations is just as much a reiteration as the whole deep learning and convnet field is - which is quite OK, since we now have computational tools and resources that we never had before, so a lot of the NN/convnet/deep learning theory suddenly became applicable. However, FPGA implementations of artificial/cellular neural networks and convnets date back something like 20-25 years now, so it doesn't sit well to suggest it's a new direction. What's new, however, is that wh
