Wednesday, January 31, 2007

science research is not proof

Two days ago, we had a very nice discussion with Pat Langley.
Pat is more a cognitive scientist than a typical computer scientist, and also very talkative :)

Basically, I asked him why he recommended a paper claiming that it takes human beings longer to reason about difficult puzzles. The authors used a forward-chaining scheme to build their model, and based on the similar performance of the model and of human subjects, they concluded that humans take longer to solve difficult puzzles.
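(For readers who haven't seen the term, forward chaining just means repeatedly firing rules whose premises are already known until the goal is derived, and the number of firings can serve as a crude proxy for reasoning effort. A minimal sketch, with facts and rules made up purely for illustration and not taken from the paper:)

```python
# Minimal forward-chaining sketch: rules are (premises, conclusion) pairs.
# The facts and rules here are illustrative only, not from the paper discussed above.
def forward_chain(facts, rules, goal):
    facts = set(facts)
    steps = 0                      # rough proxy for "reasoning time"
    changed = True
    while changed and goal not in facts:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in facts and premises <= facts:
                facts.add(conclusion)   # fire the rule
                steps += 1
                changed = True
    return goal in facts, steps

rules = [({"a"}, "b"), ({"b"}, "c"), ({"b", "c"}, "d")]
print(forward_chain({"a"}, rules, "d"))   # (True, 3): more firings ~ harder puzzle
```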

OK. That paper is a little boring to me, since the result is totally unsurprising, even too "obvious". Also, I really doubt that human beings reason like forward chaining or backward chaining. So at first sight the paper struck me as unconvincing, even though its result is obvious.

So I asked him about the validity of this experimental setup. His question back to me: other than this way, what can you do? Yep, I just posed questions but forgot to think outside the box. What would I do if I were the researcher? You propose a model, and then all you can do is find "sufficient" evidence to support your claim. In most disciplines, however, claims cannot be proved the way they are in mathematics. And even if something is proved in some formal sense, it might not work in reality. That's a common situation in data mining, and also in planning. The disadvantage of deduction is that you have to trust your premises 100%; if one of your assumptions is wrong, everything falls apart.

After mulling over that problem for a while, I have to sadly admit that this is the only possible, or at least the proper, way to justify a new claim. So the problem becomes: how do you find "enough" evidence?

It seems that our scientific research is very weak. Or maybe that's exactly the process of doing research. You propose a model to solve a problem or explain some phenomenon. Then people use one counterexample to disprove your model. (Disproof is always much easier than proof, though this doesn't seem to hold for refutation in formal logic.) Then a new model or theory is proposed.

You'll find that almost all science or engineering research repeats this same cycle.

Going back to machine learning experiments, I think I've already put too much faith in 10-fold cross-validation. In fact, most tasks have no single correct evaluation method. Take information retrieval or data analysis: these tasks require a reasonably good answer, not an optimal solution.
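(To be concrete, 10-fold cross-validation just means splitting the data into ten folds and averaging the test score over ten train/test rounds. A minimal sketch, where `train` and `evaluate` are placeholders for whatever learner and metric you happen to use:)

```python
import random

# Minimal 10-fold cross-validation sketch.
# `data` is a list of examples; `train` and `evaluate` are placeholder callables.
def cross_validate(data, train, evaluate, k=10, seed=0):
    data = data[:]
    random.Random(seed).shuffle(data)
    folds = [data[i::k] for i in range(k)]          # k roughly equal folds
    scores = []
    for i in range(k):
        test = folds[i]
        train_set = [x for j, f in enumerate(folds) if j != i for x in f]
        model = train(train_set)
        scores.append(evaluate(model, test))
    return sum(scores) / k                          # the averaged score is what gets reported
```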

Take the recent paper I coauthored with Payam for ICML: we found that two dramatically different evaluation methods turn out to be almost the same for comparing feature selection methods. There's actually no proof, but I believe it's interesting research.

Science research is pragmatic.
