I would like to know the current state of the Foldit project. I'm not talking about the game itself, but the underlying research project on how humans compare to algorithms in solving these problems.
Are any papers or drafts available?
Are results compared on LiveBench? What is the evaluation process for these puzzles?
Is there anything you can show for statistics on how user models compare to random models, or to those created by other means?
I think it would mean a lot to users to see whether or not the structure is even close to the native structure, or how much native content is present in the models.
There's an intriguing entry today saying that Rosetta@home plus some human intuition generally did better on CASP8 targets than the automated Robetta server. Is the "human intuition" bit a reference to Foldit? Exciting if true, but it's not clear that that's what it means.
I, too, would like to know whether Foldit has helped or advanced research in any considerable measure. If it has, even a small bit, then fine. It'd be nice if folders weren't kept in the dark about these issues.
I'm just curious what's going on, since puzzles are still being posted, but there hasn't been any news here or in the game software for a couple of weeks.
I'm afraid we can't deliver these sorts of results as soon as you (and we) would like; meaningful evaluations take a lot of time. In the meantime, you can find encouraging tidbits in several places on the forums and in the news. Rest assured that we'll share what we know as soon as we know it.
On that note, we will be starting a blog later this week or next as well.
Thank you, looking forward to the blog.
I hope nobody takes offense to this, but sometimes I wonder if I've done something worthwhile, or just made it to first place in the special olympics of protein folding.