This is a follow-up question to the March Science Chat. In the chat, bkoep outlined how a Foldit solution is selected for further testing in Rosetta and the wet lab:
'There are two different pathways for a Foldit design to reach the wet lab. The first is that it is high-scoring. The cream of the crop is automatically submitted to Rosetta@home, mainly as a way to benchmark our progress from week to week. We don't even really inspect these designs. The second pathway involves manual inspection. Here we're looking for designs that look like promising folders to us. We start with the shared solutions. Then we move on to the bulk solutions, which are clustered (to remove identical or near-identical solutions), and ranked by score. We go down the list of clustered solutions (which naturally includes the top-ranking solutions) and inspect anything that looks "plausible"—usually around 100 models.' –bkoep
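(To be clear about what I'm picturing for the clustering/ranking step, here is a rough sketch of how I imagine it might work. This is purely my own guess, not actual Foldit or Rosetta code; the RMSD cutoff, the greedy clustering, and the cap of ~100 models are all assumptions on my part.)

```python
# Purely my own guess at the clustering/ranking step, not actual Foldit/Rosetta code.
from dataclasses import dataclass
import numpy as np

@dataclass
class Solution:
    name: str
    score: float        # Foldit score (higher is better)
    coords: np.ndarray   # backbone coordinates, shape (n_residues, 3)

def rmsd(a: np.ndarray, b: np.ndarray) -> float:
    """Plain coordinate RMSD (no superposition, for simplicity)."""
    return float(np.sqrt(np.mean(np.sum((a - b) ** 2, axis=1))))

def cluster_and_rank(solutions: list[Solution],
                     rmsd_cutoff: float = 1.0,
                     n_inspect: int = 100) -> list[Solution]:
    """Greedy clustering: walk the solutions from best to worst score and keep
    one representative per cluster; anything within rmsd_cutoff of an
    already-kept model is treated as a near-duplicate and skipped."""
    ranked = sorted(solutions, key=lambda s: s.score, reverse=True)
    representatives: list[Solution] = []
    for sol in ranked:
        if all(rmsd(sol.coords, rep.coords) > rmsd_cutoff for rep in representatives):
            representatives.append(sol)
        if len(representatives) >= n_inspect:
            break
    return representatives  # the ~100 models that would get a manual look
```

I sketched it as greedy clustering starting from the top of the score ranking because that would naturally include the top-ranking solutions, as described above, but I don't know whether the real pipeline works that way.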
My question is: do you base the manual selection on objective measures (such as loop quality, realistic sidechain positions, etc.), on expert judgement by the scientists, or some combination of both?
Also, can you give us a ballpark estimate of how many of the ~100 manually inspected models you typically send to Rosetta? Do you feel you are testing enough lower-scoring designs to form a good estimate of Foldit's false negative rate (how often the score function misses a good design)?
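(By "false negative rate" I mean roughly the following, with made-up numbers purely to illustrate what I'm asking about:)

```python
# Illustrative only: what I mean by the score function's "false negative rate".
good_designs_tested = 40     # hypothetical count of tested designs that actually fold
missed_by_score = 10         # of those, how many scored too low to be auto-selected
false_negative_rate = missed_by_score / good_designs_tested  # = 0.25 in this example
```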