Deep Learning Pioneer On the Next Generation of Hardware For Neural Networks
An anonymous reader writes: While many recognize Yann LeCun as the father of convolutional neural networks, the momentum of which has ignited artificial intelligence at companies like Google, Facebook and beyond, LeCun has not been strictly rooted in algorithms. Like others who have developed completely new approaches to computing, he has an extensive background in hardware, specifically chip design, and that grounding in hardware specialization, in moving data around complex problems, and ultimately in core performance has proven handy. He talks in depth this week about why FPGAs are coming onto the scene as companies like Google and Facebook seek a move away from "proprietary hardware" and look to "programmable devices" to do things like, say, pick out a single face of one's choosing from a population of 800,000 in under five seconds.
The problem with neural networks (Score:3)
Is that *in theory* you could understand why they come to a particular result, but in practice, with a large network, it can be very hard for any person to get their head around the processes leading up to the output. This means that unless safety rules are changed we won't be seeing these things driving cars or flying aircraft anytime soon, since the software needs to be verifiable and neural networks are not.
Re: (Score:3)
Or, arguably, we need to change our definition of "verifiable"... For complex activities such as driving cars, we're reaching the limits of traditionally programmed computers. A human programmer cannot possibly think of every possible situation a car might encounter on the street and pre-program an appropriate response into the car. Neural networks and "artificial intelligence" don't have a pre-programmed response, but can come up with one based on patterns they know. So it becomes more about giving the machine a robust basis to work on.
Re: (Score:2)
But then one day the neural net has a "senior moment" and drives the car off a cliff. And no one can figure out why. At least with a program you'll eventually figure out where the failure is. But I take your point about pre-programmed responses and you're right. I'm not really sure what the solution is - maybe use a neural network but have a normal program acting as a watchdog?
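For what it's worth, that watchdog could be a thin, conventional shell around the network's output: the net proposes a control action, and a small rule-based monitor vetoes anything outside hard limits. A minimal sketch in Python (the function names and limits here are made up for illustration):

    # Hypothetical sketch of the "watchdog" idea: a trained network proposes a
    # steering command, and a small rule-based monitor clamps anything that
    # violates hard, human-auditable limits.

    MAX_STEERING_ANGLE = 0.5   # radians, assumed vehicle limit
    MAX_STEERING_RATE = 0.1    # radians per control tick, assumed

    def watchdog(previous_angle, proposed_angle):
        """Clamp the network's proposal to hard limits a normal program can enforce."""
        # Reject physically implausible absolute angles.
        proposed_angle = max(-MAX_STEERING_ANGLE, min(MAX_STEERING_ANGLE, proposed_angle))
        # Reject sudden jerks the network should never command.
        delta = proposed_angle - previous_angle
        if abs(delta) > MAX_STEERING_RATE:
            proposed_angle = previous_angle + MAX_STEERING_RATE * (1 if delta > 0 else -1)
        return proposed_angle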
Re: (Score:3)
You make a very interesting point.
With automation, it's a lot easier for us to accept a given amount of understandable failure than a much smaller amount of inexplicable failure. That might be a roadblock against some forms of automation.
In any case, there's also economics, which does like statistics, and will make you choose the strategy that fails less overall. For example, insurance companies might favour driving algorithms that crash less often over ones that crash a bit more often but for better-known causes.
Re: (Score:2)
It's actually your geek pride that just plunged to astounding depths.
Computers don't beat humans at chess by playing human chess better than humans. They beat humans by having a deeper view of the combinations and permutations and by making very few mistakes.
A momentary "senior moment" in a self-driving car (I wish I could have rendered that in priapismic scare quotes, but Slashdot defeats me) would just as likely be foll
Re: (Score:1)
A human programmer cannot possibly think of every possible situation a car might encounter on the street and pre-program an appropriate response into the car. ...
it becomes more about giving the machine a robust basis to work on
A serious question: is ensuring that a training data set is "robust" (i.e. all possible relevant scenarios are somehow in it) that much easier for humans to do than "thinking of every possible situation"?
Perhaps it is, but that is not obvious to me. It seems like the two tasks are both very difficult.
Re:The problem with neural networks (Score:4, Informative)
Sure, it's all extremely difficult. I'd think with neural networks you can use an evolutionary approach and eventually choose the program which has evolved and performed best over a series of X million tests. The question "when is the program done" doesn't mean "when has the programmer thought of every last possibility" anymore, but rather "when are we satisfied enough with the statistics to trust this program?"
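Roughly, that selection loop might look like the sketch below; evaluate() and mutate() are hypothetical stand-ins for a scenario-based simulator score and a weight-perturbation step:

    # Minimal sketch of "evolve, test millions of times, keep the best".
    # Assumes at least two initial candidates; evaluate() returns a score for
    # one randomized test scenario, mutate() perturbs a candidate's weights.
    import random

    def evolve(initial_candidates, generations, tests_per_generation, evaluate, mutate):
        population = list(initial_candidates)
        for _ in range(generations):
            # Score each candidate over many randomized test scenarios.
            scored = [(sum(evaluate(c) for _ in range(tests_per_generation)), c)
                      for c in population]
            scored.sort(key=lambda pair: pair[0], reverse=True)
            # Keep the top half, refill with mutated copies of the survivors.
            survivors = [c for _, c in scored[: len(scored) // 2]]
            population = survivors + [mutate(random.choice(survivors)) for _ in survivors]
        # Return the candidate that performs best on a final round of tests.
        return max(population, key=lambda c: sum(evaluate(c) for _ in range(tests_per_generation)))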
Re: (Score:2)
And they don't have to; all they have to do is make sure cars are substantially better than humans at not driving into things. What to drive into and what not to drive into in the event of an unavoidable accident will be determined by a simple scoring system that evaluates each possible route and picks the one with the best score. The scoring system
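The comment is cut off, but a scoring system of that sort could be as simple as the following sketch (candidate_routes and the toy scoring weights are made up for illustration):

    # Hypothetical sketch of the "simple scoring system" idea: enumerate the
    # feasible escape routes, score each on predicted outcome severity, and
    # pick the least bad one.

    def pick_route(candidate_routes, score_route):
        """Return the route with the highest (least bad) score."""
        return max(candidate_routes, key=score_route)

    # Toy scoring function: penalize predicted collisions far more heavily
    # than merely leaving the lane.
    def toy_score(route):
        return -(10.0 * route["predicted_collisions"] + 1.0 * route["lane_departures"])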
Re: (Score:2)
A human programmer cannot possibly think of every possible situation a car might encounter on the street and pre-program an appropriate response.
I watched a talk by a Google self-driving car engineer; the funniest moment (and an example of your point about pre-programming) was the video showing the time when a Google car came across a woman driving a motorized wheelchair around chasing ducks in the middle of the street.
Re: (Score:1)
A human programmer cannot possibly think of every possible situation a car might encounter on the street and pre-program an appropriate response.
I watched a talk by a Google self-driving car engineer; the funniest moment (and an example of your point about pre-programming) was the video showing the time when a Google car came across a woman driving a motorized wheelchair around chasing ducks in the middle of the street.
Where I live I have seen motorized wheelchairs in the middle of the road, and also ducks crossing the road, although not this precise scenario. The point is that while it might seem unusual to see this on a Californian desert freeway, it isn't really that difficult to enumerate most possible hazards.
If your automatic car crashes because of trans-dimensional anti-matter vampire bats or something, I hardly think anyone's going to worry about the programming missing out on that possibility.
Re:The problem with neural networks (Score:4, Insightful)
Fortunately we can understand the processes within real people that lead to their actions. This is the reason that we safely let them drive cars, trains or fly planes.
Re: (Score:2)
"Fortunately we can understand the processes within real people that lead to their actions. "
Since when? Psychiatrists have been claiming that for years, but I see little evidence for it beyond simple actions. Sometimes even the person themselves doesn't understand why they do something if it was subconscious.
Re: (Score:1)
Whoosh!
Re: (Score:1)
"Fortunately we can understand the processes within real people that lead to their actions. "
Since when? Psychiatrists have been claiming that for years, but I see little evidence for it beyond simple actions. Sometimes even the person themselves doesn't understand why they do something if it was subconscious.
But in this context, it's usually something along the lines of "I was texting on my phone while eating a burrito and slapping my kid's face in the seat behind me, which is why I failed to see the red light and hit the schoolbus without even braking".
It's not really a question of subtle psychological explanations.
Re: (Score:1)
You need to have your sarcasm detector checked.
Re: (Score:2)
... since the software needs to be verifiable ...
The software does NOT have to be "verifiable". It just has to be thoroughly tested, and in practice, shown to be better than humans. It doesn't have to be perfect, it just has to be an improvement.
Only trivial programs can be mathematically verified. Even for mission-critical programs that make life-and-death decisions, very few can be proven correct. And even then, are you sure you trust the proof?
There are techniques for making ANNs more reliable. One technique is "boosting": Independently train two
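The comment is cut off, but the idea seems to be independently trained networks that cross-check each other (strictly speaking, that is closer to a redundant ensemble than to classic boosting). A minimal sketch, with net_a and net_b standing in for the independently trained models:

    # Minimal sketch of redundancy via independently trained models: run both
    # networks on the same input and only trust the answer when they agree.
    # net_a and net_b are placeholders for independently trained models that
    # each return a scalar prediction.

    def redundant_predict(net_a, net_b, x, tolerance=0.05):
        a, b = net_a(x), net_b(x)
        if abs(a - b) <= tolerance:
            return (a + b) / 2      # agreement: average the outputs
        return None                 # disagreement: flag for a fallback path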
Re: (Score:2)
The software does NOT have to be "verifiable"
Says who?
Says everyone who uses current software, which is everyone. Almost none of it has been formally verified. Why should NNs be held to a different standard?
Re: (Score:1)
The software does NOT have to be "verifiable"
Says who?
Says everyone who uses current software, which is everyone. Almost none of it has been formally verified. Why should NNs be held to a different standard?
I think you're using a different definition of "verifiable" than the rest of us.
What we mean is that you can reproduce the results given a certain set of inputs, which you most certainly can do with most software.
You seem to be thinking of some form of pre-approval testing for 100% accuracy, which is a different question.
Re: (Score:2)
This arrangement does end up with ~30,000 fatalities a year, but it seems to enjoy broad support.
Re: (Score:2)
And your alternative would be what? Not have allowed anyone to drive in the last 100 years?
Re: (Score:3)
The initial
Re: (Score:2)
That's just not true.
Humans, especially urban dwellers, are known to have a certain set of capabilities, in general.
Also, they are known to behave in a certain fashion, and to abide by certain rules.
For example, a human with a tendency to kill everyone in his path would just not be able to apply for a driver's license; he would be in jail, dead, or something similar.
That black box testing is only verifying very specific knowledge and ability. It doesn't do a great job at that, but its task is a lot easier than
Re: (Score:2)
No.
That's what the GP proposed.
For a human, a skill test is OK, because we already know he's a human, and cities are built around humans. We can expect him to behave in a certain way, and we kind of know his possible range of abilities and limitations, even if not in a formal way.
There's a reason why we require other things, like a minimum age: being a responsible adult is a precondition to the test.
What they are doing right now is different. Still a black box test, but much more comprehensive. They a
Re: (Score:2)
Well, how hard can it be? [youtube.com]
Re: (Score:2)
Is that *in theory* you could understand why they come to a particular result, but in practice, with a large network, it can be very hard for any person to get their head around the processes leading up to the output. This means that unless safety rules are changed we won't be seeing these things driving cars or flying aircraft anytime soon, since the software needs to be verifiable and neural networks are not.
I would agree that neural networks shouldn't be in a learning mode while they're in operation, but once they are trained and the network is no longer being modified to fit the training set, a neural net is like any other algorithm and will output predictable results.
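That point is easy to illustrate: with the weights frozen, a forward pass is just a fixed function, so identical inputs always produce identical outputs. A toy sketch (the tiny two-layer network is purely illustrative):

    # With the weights frozen, the forward pass is a plain deterministic
    # function: no randomness, no weight updates, same input -> same output.
    import numpy as np

    rng = np.random.default_rng(0)
    W1, W2 = rng.normal(size=(4, 8)), rng.normal(size=(8, 1))   # frozen weights

    def forward(x):
        return np.tanh(x @ W1) @ W2

    x = np.array([[0.1, -0.3, 0.7, 0.2]])
    assert np.array_equal(forward(x), forward(x))   # identical inputs, identical outputs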
Re: (Score:2)
You wouldn't typically use a single neural network from input (LIDAR, video, gyros, accelerometers, etc.) to output (steering and pedals).
More typically you'd use different neural networks to tackle steps in the chain. One might identify the borders of the road and the median. Another might pick out cars. Another might project the position of the cars into the future. Each would be easier to test individually. You might also have a "supervisor" that looked for disagreement or inconsistencies between the subsystems.
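A rough sketch of that modular arrangement, with each stage and the supervisor's cross-checks as separate, individually testable pieces (all the models and thresholds here are hypothetical placeholders):

    # Rough sketch of the modular idea: separate perception stages plus a
    # supervisor that cross-checks them. lane_model, car_model and predictor
    # are hypothetical stand-ins; positions are simple (x, y) pairs in metres.

    def pipeline(frame, lane_model, car_model, predictor, max_jump=5.0):
        road = lane_model(frame)        # e.g. set of drivable (x, y) cells
        cars = car_model(frame)         # current (x, y) of each detected car
        futures = predictor(cars)       # predicted (x, y) one step ahead

        # Supervisor: every detected car should lie on the detected road, and
        # a predicted position should not jump implausibly far in one step.
        on_road = all(pos in road for pos in cars)
        plausible = all(abs(fx - cx) + abs(fy - cy) < max_jump
                        for (cx, cy), (fx, fy) in zip(cars, futures))
        if not (on_road and plausible):
            raise RuntimeError("subsystems disagree; hand control to a fallback")
        return road, cars, futures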
Can this bubble burst already? (Score:1)
We've been through all this before ... spending hundreds of millions just to re-realize that perceptrons are a dead end is a poor use of resources.
Re: (Score:3)
I don't know. There is a part of me that says "yeah, this is a bubble that's going to burst soon", and another part that says "wait, you've never seen this much improvement on such complex tasks before". Probably the future is in between, and parts of the deep conv nets are here to stay, while some other parts will rapidly be forgotten. But frankly, I don't know, which is a bit scary.