Jeff Hawkins' Cortex Sim Platform Available
UnreasonableMan writes "Jeff Hawkins is best known for founding Palm Computing and Handspring, but for the last eighteen months he's been working on his third company, Numenta. In his 2005 book, On Intelligence, Hawkins laid out a theoretical framework describing how the neocortex processes sensory inputs and provides outputs back to the body. Numenta's goal is to build a software model of the human brain capable of face recognition, object identification, driving, and other tasks currently best undertaken by humans. For an overview see Hawkins' 2005 presentation at UC Berkeley. It includes a demonstration of an early version of the software that can recognize handwritten letters and distinguish between stick figure dogs and cats. White papers are available at Numenta's website. Numenta wisely decided to build a community of developers rather than trying to make everything proprietary. Yesterday they released the first version of their free development platform and the source code for their algorithms to anyone who wants to download it."
Re:Right... (Score:2, Interesting)
Re:Barrier to entry (Score:3, Interesting)
If you're interested in the field of AI with neural-like computing, your best bet is to learn a huge amount of math. Really, you can't understand anything without at least second-year linear algebra, and that's just to basically follow what's going on. If you actually want to contribute, you're going to need a math degree.
This might sound like I'm discouraging you. I'm not; I just want you to understand what you're up against. You can definitely do some toy problems with the neural network packages out there without really understanding what you're doing. But if you actually want to contribute, you don't start there. You've got to do your basics and get your math chops up.
As for what you should read... get some basic undergrad linear algebra books. A Google search gave me this link:
http://joshua.smcvt.edu/linearalgebra/ [smcvt.edu]
which looks like it will pretty much give you everything you need for the basics of neural networks (without the neural network part).
Once you've got that down (and maybe you already do if you've got a math or CS degree), you can start reading some basic neural network material. The Wikipedia entry for the perceptron:
http://en.wikipedia.org/wiki/Perceptron [wikipedia.org]
seems pretty good and should give you a start on how neural networks (very, very simple ones) work. It gets surprisingly complicated from there.
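To make that concrete, here's a minimal perceptron sketch in Python (my own toy code, not from any package; the AND data is purely illustrative):

# Toy perceptron (illustrative only): learns a linear decision
# boundary with the classic perceptron update rule.
def train_perceptron(samples, epochs=20, lr=0.1):
    # samples: list of (inputs, target) pairs, target in {0, 1}
    n = len(samples[0][0])
    w = [0.0] * n                        # weights
    b = 0.0                              # bias
    for _ in range(epochs):
        for x, target in samples:
            activation = sum(wi * xi for wi, xi in zip(w, x)) + b
            y = 1 if activation > 0 else 0
            error = target - y           # 0 if correct, +1/-1 otherwise
            w = [wi + lr * error * xi for wi, xi in zip(w, x)]
            b += lr * error
    return w, b

# Toy data: logical AND, which is linearly separable.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
print(train_perceptron(data))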
Things can get pretty hairy math-wise once you start getting into learning algorithms. That's because you are basically trying to do a minimization with *a lot* of variables. It's not surprising that most of the innovative algorithms actually come from physics (well, in my day anyway... probably things have changed...), since when they're modeling stuff they tend to need to do the same thing. This is where you get into the scary math with multivariate calculus and such... way out of my league (I suck at math...).
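To see what that minimization looks like in the simplest possible case, here's a toy gradient descent sketch (a made-up quadratic objective, not any real learning algorithm):

# Toy gradient descent: minimize f(w) = sum((w_i - t_i)^2) over many
# variables. Real learning algorithms do the same thing to a network's
# error, just with a much messier gradient (e.g. backpropagation).
def gradient_descent(target, steps=1000, lr=0.01):
    w = [0.0] * len(target)                  # start at the origin
    for _ in range(steps):
        grad = [2 * (wi - ti) for wi, ti in zip(w, target)]
        w = [wi - lr * gi for wi, gi in zip(w, grad)]
    return w

print(gradient_descent([3.0, -1.0, 2.5]))    # converges toward the target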
Of course there are other forms of AI. But if you are trying to model the way the brain works, you need lots of math...
Starting companies to be heard? (Score:2, Interesting)
Re:This will cause problems (Score:2, Interesting)
Hmm.... (Score:2, Interesting)
1) All the research into cortical circuitry is done in non-humans. There are definite similarities between our cortex and that of a rat, but there are also drastic differences; if there weren't, rats would be able to talk, think, and reason like we do. (Yes, lots of research is being done in non-human primates, but this work is EXTREMELY expensive, and even non-human primates have different cortical circuitry than we do.)
(Not only are the cortices of different species drastically different, scientists often choose regions of cortex that have no correlate in humans. Many neuroscientists study the barrel cortex, a region that is specifically designed to integrate the signals from a rodent's whiskers. Humans don't have whiskers, and we also don't have a barrel cortex. Anything learned about the circuitry of the barrel cortex will not necessarily carry over to human cortex.)
2) Intra-population circuitry research examines very small subsets of the neurons that make up bigger populations. When studying neurons in the visual cortex, for example, the best anyone can do is look at the firing of about 150 neurons. When you consider that there are over 10,000,000,000 (ten BILLION) neurons in the human brain, a small set of 150 is almost nothing. We don't have sufficient technology to examine what each neuron in a specific population is doing.
3) Inter-population circuitry research only looks at which populations are connected to each other. Yes, we know what types of neurons project from one area of the brain to the next; however, this only gives a very rough schematic of the circuitry. The circuitry of both the cerebellum and the hippocampus has been described beautifully (both have been known for well over 50 years). However, once we know this circuitry, it sheds no light on how the circuitry actually accomplishes its task.
4) Failure to integrate both intra- and inter-population circuitry. I have yet to read a paper that does a good job of integrating these two approaches. Most neuroscientists pick one emphasis and stick with it. In order to understand exactly what the cortex is doing, you must integrate all levels of research into your studies.
5) Study of the cortex alone is insufficient. The cortex projects to many regions of the brain whose functions are still unknown. These connections might not appear necessary, but if they really weren't necessary, why are they there? Back in the day, people who had really bad seizures would undergo what is called a corpus callosotomy: the cutting of the fibers that connect the two hemispheres of the brain. At first the procedure was called a success. However, after further investigation, it turned out that the people on whom this operation was performed had drastic problems. (For example, if a person was holding an object in their left hand (the sensory fibers project from the left hand to the right side of the brain) and wasn't allowed to see the object, then when the examiner asked what they were holding, they would respond that there was nothing in their hand.) This example illustrates that upon initial examination many regions of the brain appear to have no function, since lesioning these structures has no apparent adverse effects; this is what many people thought about the corpus callosum, but upon further examination it proved untrue. Before we can understand how the cortex fully functions, we must understand how the entire brain works with it.
Sorry to be a naysayer, but I have serious doubts whenever someone claims to have figured out how the cortex works.
almost... (Score:3, Interesting)
http://en.wikipedia.org/wiki/Baum-Welch_algorithm [wikipedia.org]
http://en.wikipedia.org/wiki/Viterbi_algorithm [wikipedia.org]
The first is an algorithm that uses the forward-backward procedure "to find the unknown parameters of a hidden Markov model." The second is a related dynamic-programming algorithm that finds the most likely sequence of hidden states (linked for reference).
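If you want to see how compact Viterbi is, here's a Python sketch (the two-state weather HMM is the usual textbook toy, made up for illustration):

# Viterbi decoding for a tiny hidden Markov model.
# V[t][s] holds the probability of the best state path ending in s at time t.
def viterbi(obs, states, start_p, trans_p, emit_p):
    V = [{s: start_p[s] * emit_p[s][obs[0]] for s in states}]
    path = {s: [s] for s in states}
    for t in range(1, len(obs)):
        V.append({})
        new_path = {}
        for s in states:
            prob, prev = max((V[t - 1][p] * trans_p[p][s] * emit_p[s][obs[t]], p)
                             for p in states)
            V[t][s] = prob
            new_path[s] = path[prev] + [s]
        path = new_path
    prob, best = max((V[-1][s], s) for s in states)
    return prob, path[best]

# Made-up example: infer the weather (hidden) from observed activities.
states = ("Rainy", "Sunny")
start_p = {"Rainy": 0.6, "Sunny": 0.4}
trans_p = {"Rainy": {"Rainy": 0.7, "Sunny": 0.3},
           "Sunny": {"Rainy": 0.4, "Sunny": 0.6}}
emit_p = {"Rainy": {"walk": 0.1, "shop": 0.4, "clean": 0.5},
          "Sunny": {"walk": 0.6, "shop": 0.3, "clean": 0.1}}
print(viterbi(("walk", "shop", "clean"), states, start_p, trans_p, emit_p))

Note the nested loop over states at every time step: runtime grows as O(T * N^2), and the path table grows with both, which is exactly the kind of cost that bites at scale.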
I work in computational linguistics, and the time an algorithm takes to run and the amount of memory it requires are serious limitations. That's why ad hoc systems are so common.
Cortex Sim == Bullsh*t (Score:5, Interesting)
Here is what many people in machine learning and computer vision think about Hawkins's stuff:
- it's way, way behind what other people in vision and machine learning are doing. Several teams have biologically-inspired vision systems that can ACTUALLY LEARN TO RECOGNIZE 3D OBJECTS. Hawkins merely has a small hack that can recognize stick figures on 8x8 pixel binary images. Neural net people were doing much more impressive stuff 15 years ago.
- Hawkins's ideas on how the brain learns are not new at all. Many scientists in machine learning, computer vision, and computational neuroscience have had general ideas similar to the ones described in Hawkins's book for a very long time. But scientists never talk about philosophical ideas without actual scientific evidence to support them. So instead of writing a popular book with half-baked conceptual ideas, they actually build theories and algorithms, they build models, and they apply them to real data to see how they work. Then they write a scientific paper about the results, but they rarely talk about the philosophy behind the results.
It's not unusual for someone to come up with an idea they think is brand new and will revolutionize the world. Then they try to turn those conceptual ideas into real science and practical technologies, and quickly realize that it's very hard (the things they thought of as mere details often turn out to be huge conceptual obstacles). Then they realize that many people had the same ideas before, but encountered the same problems when trying to reduce them to practice (which is why you never heard about those ideas). These people eventually scale back their ambitions and start working on ideas that are considerably less revolutionary, but considerably more likely to result in research grants, scientific publications, VC funding, or revenues.
Most people go through that "naive" phase (thinking they will revolutionize science) while they are grad students. A few of them become successful scientists. A tiny number of them actually manage to revolutionize science or create new trends. Hawkins quit grad school and never had the chance to go through that phase. Now that he is rich and famous, the only way he will understand the limits of his idea is by wasting lots of money (since he obviously doesn't care about such things as "peer review"). In fact, many reputable AI scientists have made wild claims about the future success of their latest new idea (Newell/Simon with the "General Problem Solver", Rosenblatt with the "Perceptron", Papert, who thought in the 60's that vision would be solved over a summer, Minsky with his "Society of Mind", etc.).
No scientist will tell Hawkins all this, because it would serve no purpose (other than pissing him off). And there is a tiny (but non-zero) probability that his stuff will actually advance the field.
- Anonymous Scientist
Re:This will cause problems (Score:3, Interesting)
Re:Right... (Score:5, Interesting)
Re:Right... (Score:3, Interesting)
Most "grand-scale theories of brain operation", in fact, fail to make claims that can be tested, at least not in the foreseeable future. They predict the large-scale algorithms by which the brain operates. They do not make any claims as to the behavior of any individual neurons, and this is the data we have to work with. Moreover, these theories generally fail to provide any explanation for existing data, such as the diversity in neuronal phenotypes, the connectivity architecture, functional segregation, the wealth of neurotransmitters, laminar structure and why the details of this structure varies across the neocortex, differences in histochemical labelling, and so on and so forth. In short, these theories tend to be computer science, and not neuroscience. They might represent major progress in the question, "how do we make a machine that can solve a difficult computational problem?" but they have very little significance in answering the question, "what are the principles that underlie neural performance?"
Not really. The ability to stop learning is a crucial element of learning. In the existing computational literature, this is related to the problem of overfitting: there comes a point where additional learning is dominated by attempts to explain noise in the data, and can actually lead to degraded performance. A classic example of "frozen learning" in living animals is the male zebra finch, who learns one song only in his life, which remains unchanged past adolescence. Anecdotally, you can also think of human accents in speech; most people never lose the accents they develop in childhood, no matter how hard they might try. Of course, both of these examples illustrate that it is not a matter of stopping learning entirely; learning can in fact still occur, but much more slowly and only under much more extreme conditions.
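In computational terms this is just "early stopping": quit training once error on held-out data stops improving. A schematic Python sketch (train_step and validation_error are placeholders, not from any particular library):

# Schematic early stopping: halt once held-out (validation) error stops
# improving, i.e. once further "learning" is mostly fitting noise.
def train_with_early_stopping(model, train_step, validation_error,
                              patience=5, max_steps=10000):
    best_err = float("inf")
    bad_steps = 0
    for _ in range(max_steps):
        train_step(model)                  # one pass of learning
        err = validation_error(model)      # error on unseen data
        if err < best_err:
            best_err, bad_steps = err, 0   # still generalizing better
        else:
            bad_steps += 1                 # possibly fitting noise now
        if bad_steps >= patience:
            break                          # "stop learning" here
    return model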
From the perspective of neuroscience (and in fact, from the unsupervised learning perspective in general), given that there are lots of models that can learn and stop learning, the much more relevant question is: how can the system switch between these two states?
Is the problem a supernatural one? Of course not. It is a very tough problem. The issue is not, at this point, a lack of theory.