Stanford 'Intro To AI' Course Offered Free Online
An anonymous reader writes "IEEE Spectrum reports that Stanford's CS221 course 'Introduction to Artificial Intelligence' will be offered online for free. Anyone can sign up and take the course, along with several hundred Stanford undergrads. The instructors are Sebastian Thrun, known for his self-driving cars, and Peter Norvig, director of research at Google. Online students will actually have to do all the same work as the Stanford students. There will be at least 10 hours per week of studying, along with weekly graded homework assignments and midterm and final exams. The instructors, who will be available to answer questions, will issue a certificate for those who complete the course, along with a final grade that can be compared to the grades of the Stanford students. The course, which will last 10 weeks, starts on October 2nd, and online enrollment is now open."
When asked how they would deal with ten thousand students, Professor Thrun replied:
"We will use something akin to Google Moderator to make sure Peter and I answer the most pressing questions. Our hypothesis is that even in a class of 10,000, there will only be a fixed number of really interesting questions (like 15 per week). There exist tools to find them."
Only 15 good questions per 10000 students (Score:4, Interesting)
Peter Norvig should be a good teacher (Score:5, Interesting)
But about 20 years ago when I was really into Common Lisp, I read his book "Paradigms of Artificial Intelligence Programming: Case Studies in Common Lisp". It was one of the best books I had ever read. Lots of fantastic examples and code.
Makes me think I should get his "modern approach" book. Maybe think about the online course.
Like Music, News and other dinosaurs. (Score:3, Interesting)
Re:this is great! (Score:2, Interesting)
What's your deal?
http://slashdot.org/comments.pl?sid=2359264&cid=36951064
Raises questions about university costs (Score:4, Interesting)
If the content of this class is exactly the same as the "real" version, and at the end you are evaluated on the grading curve right alongside "real" students... then you have to question why the cost of "really" being a Stanford student is $55,385 per year [stanford.edu], while the cost of receiving the same product without the formal diploma is $0.
How much of the expense of modern university education today is actually tied to the core product, and how much is simple sociology? That is, only a certain percentage of society can be in the "elite" ranks by definition... and so elite institutions must price themselves accordingly to maintain the appropriate exclusion.
Re:Not what you think (Score:3, Interesting)
The field of AI is no longer focused on recreating the human brain, as far as I've learned from my studies. They dreamed big back when the field first came to be, but the complexity of the problem became apparent. It simply isn't possible at present.
There is planning, search, and logic AI, which finds the best possible plans for different problems and is often used in manufacturing: designing computer chips, for example, or generating instructions for robots and cranes that build, sort, or package. AI can approximate solutions to problems that cannot be solved exactly by efficient algorithms; as such, AI often deals with problems in NP.
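As a toy illustration of the kind of approximation the parent describes, here is a greedy nearest-neighbour heuristic for the (NP-hard) travelling salesman problem, sketched in Python. The city names and coordinates are made up for the example; a real planner would use far more sophisticated search:

```python
import math

# Hypothetical city coordinates, invented for illustration.
cities = {"A": (0, 0), "B": (1, 5), "C": (4, 1), "D": (6, 4)}

def dist(p, q):
    """Euclidean distance between two points."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def greedy_tour(start):
    """Nearest-neighbour heuristic: always visit the closest unvisited city.
    Runs in polynomial time but only approximates the optimal tour."""
    tour, remaining = [start], set(cities) - {start}
    while remaining:
        here = cities[tour[-1]]
        nxt = min(remaining, key=lambda c: dist(here, cities[c]))
        tour.append(nxt)
        remaining.remove(nxt)
    return tour

print(greedy_tour("A"))  # a cheap tour, not necessarily the optimal one
```

The point is the trade-off the parent mentions: the heuristic is fast and usually good, but it gives up the guarantee of optimality that an exhaustive search would provide.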
Another field is game AI, which is the one I know most about. There's a plethora of sub-fields here. Traditional game AI dealt with solving games, and chess is a prominent example. (AI hasn't solved chess, but it has found many endgames that humans did not know, and found solutions to endgames that humans had theorized about for over a hundred years.) Modern game AI concerns itself with AI for video games. The goals are many: fun and challenging opponents, autonomous opponents that learn during play and gain new knowledge, procedural content generation tailored to the player, and much more. Not that much has been done in the industry, but in the field there's a lot of focus on machine learning techniques that learn the games themselves based on criteria set by the creators.
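The game-solving the parent describes boils down to exhaustive game-tree search. A minimal sketch, using a toy game of my own choosing (single-pile Nim: take 1 or 2 stones, whoever takes the last stone wins) rather than anything chess-sized:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def wins(stones):
    """True if the player to move can force a win from this position.
    This is plain minimax with memoization; real engines add alpha-beta
    pruning, evaluation functions, endgame tablebases, and so on."""
    if stones == 0:
        return False  # no move left: the previous player took the last stone and won
    # A position is winning if some move leaves the opponent in a losing position.
    return any(not wins(stones - take) for take in (1, 2) if take <= stones)

# Exhaustive search "solves" the game: positions divisible by 3 are lost
# for the player to move.
print([n for n in range(10) if not wins(n)])
```

Chess endgame tablebases are the same idea carried out at enormous scale: enumerate every position and propagate win/loss/draw values backward, which is how computers found endgame results humans had only conjectured.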
Lately I haven't read anything about AI that attempts to be human-like in the sense pursued earlier. I've read several times, however, that the Turing test is flawed and should be ignored; it serves no purpose in the field. The new purpose is to create machines that can do some task, and do it well. Whether it's deemed intelligent by humans is of no consequence. If it does a job better than a human, then it is an advance. That it is worse than optimal is no failing, because as I said, the problems dealt with often cannot be solved optimally. (At least not until quantum computing, though I know nothing about how that works; it seems to be another new dream, so if it's like the early dream of AI, it will probably not solve everything, just make advances.)