Augmenting Data Beats Better Algorithms
eldavojohn writes "A teacher is offering empirical evidence that when you're mining data, augmenting data is better than a better algorithm. He explains that he had teams in his class enter the Netflix challenge, and two teams went two different ways. One team used a better algorithm while the other harvested augmenting data on movies from the Internet Movie Database. And this team, which used a simpler algorithm, did much better — nearly as well as the best algorithm on the boards for the $1 million challenge. The teacher relates this back to Google's page ranking algorithm and presents a pretty convincing argument. What do you think? Will more data usually perform better than a better algorithm?"
Heuristics?? (Score:1, Insightful)
Depends on the Problem (Score:5, Insightful)
I worked for a while on the Netflix prize, and if there's one thing I learned it's that a recommender system almost always gets better the more data you put into it, so I'm not sure if this one case study is enough to apply the idea to all algorithms.
Though, in a way, this is sort of a "duh" result - data mining relies on lots of good data, and the more there is generally the better a fit you can make with your algorithm.
I think better is subjective... (Score:3, Insightful)
Here, "better" means different things to different people. More data gives you a larger set of people, and probably a more accurate definition of "better" for that larger set. I'm not sure you can really compare the two.
attn computer scientists: stop renaming stuff (Score:0, Insightful)
The "PageRank algorithm" is just an eigenvalue calculation.
I know you computer scientists like playing mathematician, but there's a reason you're the butt of mathematicians' jokes: you're nothing more than glorified engineers.
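For what it's worth, that "eigenvalue calculation" is small enough to sketch: PageRank is the principal eigenvector of the damped link matrix, which plain power iteration finds. The four-node link graph below is invented purely for illustration:

```python
# Power iteration for PageRank: the "eigenvalue calculation" in question.
# The 4-node link graph here is made up for illustration.
import numpy as np

def pagerank(adj, damping=0.85, tol=1e-10):
    """adj[i][j] = 1 if page i links to page j."""
    n = len(adj)
    # Column-stochastic transition matrix: follow an outgoing link uniformly.
    M = np.zeros((n, n))
    for i, row in enumerate(adj):
        out = sum(row)
        for j, link in enumerate(row):
            if link:
                M[j, i] = 1.0 / out
    # "Google matrix": damped link-following plus uniform teleportation.
    G = damping * M + (1 - damping) / n * np.ones((n, n))
    r = np.full(n, 1.0 / n)          # start from the uniform distribution
    while True:
        r_next = G @ r               # one power-iteration step
        if np.abs(r_next - r).sum() < tol:
            return r_next
        r = r_next

links = [[0, 1, 1, 0],
         [0, 0, 1, 0],
         [1, 0, 0, 0],
         [0, 0, 1, 0]]
ranks = pagerank(links)
print(ranks)  # sums to 1; node 2, which everyone links to, ranks highest
```

Whether that makes it "just" linear algebra or a serious engineering problem at web scale is, of course, the parent's whole argument.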
Um, Yes? (Score:5, Insightful)
I think we need much, much more rigorous definitions of "more data" and "better algorithm" in order to discuss this in any meaningful way.
Is it just me that is surprised here? (Score:2, Insightful)
I read the article in question here and can say that I'm surprised that this is even a question.
"Better data" not "more data" (Score:1, Insightful)
Look at the application: Netflix alone vs. Netflix+IMDB. The second not only has more data, it has "better" data, in the sense that more human decision inputs have been applied to it, weighting the data toward more correct results.
But if you compared Netflix 2007 data vs. Netflix 2006-2007 data, I don't think you would find a significant difference in results. That is the same "type" of data, only more of it, whereas the former is a practical example of data fusion.
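The distinction can be made concrete with a toy sketch. Everything below (users, movies, ratings, genres) is invented; the point is only that joining in IMDB-style side data lets even a trivial predictor condition on genre instead of falling back to a user's overall average:

```python
# Toy illustration of "data fusion": augmenting a ratings table with
# genre metadata. All names and numbers below are invented.
ratings = {  # (user, movie) -> stars
    ("ann", "m1"): 5, ("ann", "m2"): 4, ("ann", "m3"): 1,
    ("bob", "m1"): 2, ("bob", "m3"): 5,
}
genres = {"m1": "drama", "m2": "drama", "m3": "action"}  # IMDB-style side data

def predict(user, movie):
    """Predict a rating as the user's mean over the movie's genre,
    falling back to the user's overall mean when the genre is unseen."""
    g = genres[movie]
    same_genre = [r for (u, m), r in ratings.items()
                  if u == user and genres[m] == g and m != movie]
    pool = same_genre or [r for (u, _), r in ratings.items() if u == user]
    return sum(pool) / len(pool)

print(predict("bob", "m2"))  # 2.0 -- bob's only other drama rating
```

Adding another year of Netflix ratings would refine the same averages; adding the genre table changes what questions the model can ask at all.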
Char-Lez
More vs Better (Score:4, Insightful)
All things being equal... (Score:4, Insightful)
And the teams were identically talented? In my CS classes, I could have hand-picked teams that could make O(2^n) algorithms run quickly and others that could make O(1) take hours.
Five stars (Score:5, Insightful)
How to translate the entire experience of watching a movie into a lone number is a separate issue.
For the Sake of Discussion (Score:4, Insightful)
You could also make an elaborate algorithm that uses user age, sex & location.
Honestly, I could provide endless ideas for 'better algorithms' although I don't think any of them would even come close to matching what I could do with a database like IMDB. Hell, think of the Bayesian token analysis you could do on the reviews and message boards alone!
This is assuming... (Score:3, Insightful)
Thus, an algorithm-driven design should always out-perform data-driven designs when knowledge of the specific is substantially less important than knowledge of the generic. Data-driven designs should always out-perform algorithm-driven design when the reverse is true. A blend of the two designs (in order to isolate and identify the nature of the data) should outperform pure implementations following either design when you want to know a lot about both.
The key to programming is not to have one "perfect" methodology but to have a wide range at your disposal.
For those who prefer mantras, have the serenity to accept the invariants aren't going to change, the courage to recognize the methodology will, and the wisdom to apply the difference.
Recommendations Systems and subjectivity (Score:4, Insightful)
The article is informative and generally correct; however, having done this sort of stuff on a few projects, I have some problems with the Netflix data.
First, the data is bogus. The preferences are "aggregates" of rental behaviors; whole families are represented by single accounts. Little 16-year-old Tod likes different movies than his 40-year-old dad, not to mention his toddler sibling and mother. A single account may have Winnie the Pooh and Kill Bill. Obviously, you can't say that people who like Kill Bill tend to like Winnie the Pooh. (Unless of course there is a strange human behavioral factor being exposed here; it could be that parents of young children want the thrill of vicarious killing, but I digress.)
The IMDB information about genre is interesting as it is possibly a good way to separate some of the aggregation.
Recommendation systems tend to like a lot of data, but not what you think. People will say: if you need more data, why have just 1-5 and not 1-10? Well, that really isn't much more data; it is just greater granularity of the same data. Think of it like "color depth" vs. "resolution" on a video monitor.
My last point about recommendations is that people have moods and are not as predictable as we may wish. On an aggregate basis, a group of people is very predictable. A single person setting his/her preferences one night may have had a good day and a glass of wine, and the numbers come out higher. The next day he may have had a crappy day and had to deal with it sober, and the numbers are different.
You can't make a system that will accurately predict the responses of a single specific individual at an arbitrary time, let alone do so from an aggregated data set. That's why I haven't put much stock in the Netflix prize. Maybe someone will win it, but I have my doubts. A million dollars is a lot of money, but there are enough vagaries in what qualifies as success to make it a lottery or a sham.
That being said, the data is fun to work with!!
Re:Depends on the Problem (Score:3, Insightful)
One Trivial Result, One Big Assumption (Score:4, Insightful)
The second assumption behind the claim is that additional information is always actually available. The comment is made that academia and business don't seem to appreciate the value of augmenting the data. That is false. In business, additional data is often simply not available (physically or for cost reasons); consequently, improving your algorithms is all you can do. Similarly, in academia (say, a computer science department) the working assumption is often that you already have all the available data and are trying to improve your algorithms.
Re:attn computer scientists: stop renaming stuff (Score:4, Insightful)
And nonlinear dimensionality reduction is just nonconvex trace optimization coupled with kernel principal component analysis (fine, call it "singular value decomposition") using Mercer's theorem to map the resulting dot product through a kernel function (usually represented as a Hermitian positive semidefinite Gram matrix), yielding an inner product space of higher (possibly infinite) dimensionality in which the original problem is linearly separable.
Now take this description and write an algorithm that performs it efficiently. And you use PageRank as an example, so let's call "efficient" "performs as well as Google on the entire web's worth of data".
If you can't do this, perhaps you should reconsider your view of computer scientists. There's no reason whatsoever to play up the boundaries between two very related fields. Arbitrary boundaries in knowledge are already bad enough; they need to be knocked down, not reinforced.
Re:Depends on the Problem (Score:3, Insightful)
Re:Depends on the Problem (Score:4, Insightful)
Anyway, if you're paranoid about data on you being used: there's a less well-known class of recommender systems that uses implicit data gathering, which can be easily set up on any site. For example, such a system might decide that because you clicked on product X many times today, you probably want it, and use that signal. Of course, implicit data gathering is noisier than explicit data gathering, but it goes to show that if you spend time on the internet, websites can always use your data for their own ends.
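The implicit variant really is trivial to set up; a minimal sketch, with invented user and product names, is just click counting against a threshold:

```python
# Toy implicit-feedback gathering: infer interest from click counts alone.
# Event and product names are invented for illustration.
from collections import Counter

clickstream = [
    ("alice", "product_x"), ("alice", "product_x"), ("alice", "product_x"),
    ("alice", "product_y"), ("bob", "product_y"),
]

def inferred_interest(user, events, threshold=3):
    """Flag products a user clicked at least `threshold` times."""
    counts = Counter(p for u, p in events if u == user)
    return {p for p, n in counts.items() if n >= threshold}

print(inferred_interest("alice", clickstream))  # {'product_x'}
```

The faultiness the parent mentions is visible even here: three clicks might mean interest, or a stuck mouse button.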
Re:attn computer scientists: stop renaming stuff (Score:2, Insightful)
I think we all need each other, folks
Re:For the Sake of Discussion (Score:1, Insightful)
You used an algorithm to sort out the "data" that you are using. The act of weighting and comparing the different properties of the data you have IS an algorithm. Period.
No algorithm can operate without data, and data is useless without an algorithm to DO something with it.
More data will give a better statistical reading of the data, and help eliminate 'bad data points', so in many cases more data can be better... but that depends on the algorithm used, the quantity of data, etc.
I would suggest the original poster simply take a couple of courses in computer science. There they will see classic examples of situations in which two algorithms are compared, and how one will excel in speed with limited data while another will excel with prolific data.
Re:attn computer scientists: stop renaming stuff (Score:1, Insightful)
If they're not engineers, then what are they?
Does nobody know Shannon anymore? (Score:2, Insightful)
Re:Depends on the Problem (Score:5, Insightful)
Say we were looking at 100 million fields, suddenly we have 50% of the possible data, and our unknown field is the same size as the known field. Much more likely to get a result then.
Re:Heuristics?? (Score:3, Insightful)
Since we are obviously talking about the "goodness" of the results produced by the algorithm, I think it's pretty safe to assume that the broader definition of "algorithm" is being used.
Re:Depends on the Problem (Score:3, Insightful)
The Netflick data shouldn't be regarded as representative of anything. That data set has shockingly low dimensionality. So far as I know, they make no attempt to differentiate what kind of enjoyment the viewer obtained from the movie, or even determine whether the movie was viewed in a solo or group situation. They don't ask "who was your favorite character / actor / actress". Nor do they follow up on aging opinions: "which of these two movies would you presently rate higher?" so the corroboration factor is zero.
I'm pretty fussy about the movies I rent. The worst movie I've endured this year was "Night at the Museum", which was loaned to me. I managed to get through it at the 1.4x speed setting on a slow evening.
As bad as it was, I wouldn't rate it less than a 3. I'd like to save 1 and 2 for outright incompetence. Was "Museum" a manipulative piece of crap? Absolutely. I'd tick that box in a heartbeat. Did I feel personally soiled by Genghis' emotional discharge? I've been showering for days. From what I've read about Genghis, the only way to get him to discharge would have been to lock him in a room with Sacagawea.
If you give "Museum" a three for competence squandered, what do you give Soderbergh's "Solaris"? I'm glad I watched it. It was interesting to see what they did with the sets, and to find out whether anything ever happens (spoiler: no). I still recall the intensity of the black woman's performance, though unfortunately her fine acting served no real purpose. While I was happy to rent it, it also earned a place on my list of movies least likely to be rented twice.
Really, Netflick deserves five gold stars for having created the least augmented opinion stream since baby spit out his Brussels sprouts.
Re:Heuristics?? (Score:3, Insightful)
Not always. Approximation algorithms are often ranked on their accuracy. Online algorithms are often ranked on something called the competitive ratio. Randomized algorithms are usually ranked on their resource usage. But none of these three need be optimal (in the context of an optimization problem) or even produce correct results (in the context of a decision problem).
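A standard textbook example of the first kind is the maximal-matching approximation for vertex cover: it is ranked by its accuracy guarantee (never more than twice the optimal cover size), not by always being optimal. A minimal sketch:

```python
# Classic 2-approximation for vertex cover: repeatedly take both
# endpoints of any uncovered edge. The result covers every edge and
# is provably at most twice the minimum cover size.
def vertex_cover_2approx(edges):
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.update((u, v))   # take both endpoints of an uncovered edge
    return cover

# A star graph: the optimal cover is just the center {0}, but the
# approximation returns 2 vertices -- within its guaranteed factor of 2.
star = [(0, 1), (0, 2), (0, 3)]
cover = vertex_cover_2approx(star)
print(sorted(cover), len(cover))  # [0, 1] 2
```

The output is valid (every edge is covered) yet suboptimal, which is exactly the sense in which such algorithms are ranked on accuracy rather than defined by a single "correct" answer.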
Algorithms must have the same correct results by definition.
[citation needed]