Scientist Chat Thursday Feb. 23rd at 21:00 GMT

Aaron Chevalier and I (beta_helix) will be in global chat on February 23rd starting at 1pm PST (21:00 GMT)

Aaron will talk about the recent results from the Flu Puzzles: http://fold.it/portal/node/990312
and I will discuss current/future CASP ROLL puzzles.

This Scientist Chat will focus on Foldit and Science (so we will not be discussing clients/bugs/feedbacks).

We hope you can make it!

Developer Preview UPDATE: We plan to post a new release to the Developer Preview (adding in a fix that was missed when removing the sliders) later today or tomorrow.

(Mon, 02/20/2012 - 20:53  |  5 comments)
auntdeen
Joined: 04/19/2009
A question for the chat re: CASP submissions

It would be great to let the community (and those of us who do team submissions) know a little more about the process you use for deciding on your submissions. We know that diversity is good (covering as many bases as possible) - but how do you as developers define diversity, and what type of tools do you use to distinguish it?

If, for instance, a top player mentions in global that they are working on a certain model for a certain puzzle, should another player abandon that model figuring that the top player will have the "best" solution, or is it worth trying to come up with a second "best"? Will that "second best" be able to have the diversity you are looking for if it's based on the same model?

beta_helix
Joined: 05/09/2008
Groups: None
auntdeen,

That is exactly what I want to discuss at the Scientist Chat.

The diversity protocol used is to look at the top-scoring Foldit prediction (across all 3 CASP ROLL puzzles for each target) and submit that model (as long as it is not identical to one of the server models... I will expand on that part in the chat).

Then we look at the next top-scoring Foldit prediction that is completely different from the previous top-scoring model (i.e. not based on the same server model, or, if it was initially based on the same model, dramatically modified since), and so on until we get 5 topologically different top-scoring Foldit solutions.

For example:
1. The top-scoring Foldit prediction was based off of Baker2 (the second RosettaServer model).
2. The next top-scoring Foldit prediction (that wasn't based off of Baker2) was the top-scoring solution from the "Top CASP ROLL Predictions" puzzle, which was based off of Quark1 (the first Quark Server prediction).
3. The next top-scoring Foldit prediction (that wasn't based off of Baker2 or Quark1) was based off of Baker4.
4. The next top-scoring Foldit prediction (that wasn't based off of Baker2, Quark1, or Baker4) was based off of Quark2.
5. The next top-scoring Foldit prediction was actually also initially based on Quark2, but was so heavily modified that it looks nothing like either the second Quark Server model or the 4th submitted Foldit model above.
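
To make the selection concrete, here is a minimal Python sketch of that loop (purely illustrative, not Foldit or Baker Lab code; the field names and the "heavily_modified" flag are my own stand-ins for a judgement that is really made by inspection):

    # Hypothetical sketch of the submission-selection protocol described above.
    # Each candidate records its Foldit score, the server model it started from,
    # and whether it was modified so heavily it no longer resembles that start.
    # (The check that the very first pick is not identical to a server model is omitted.)

    def pick_submissions(predictions, n_submissions=5):
        chosen = []
        used_starts = set()
        # Walk candidates from the best Foldit score downwards.
        for pred in sorted(predictions, key=lambda p: p["score"], reverse=True):
            # Accept a candidate if it comes from a server model we haven't used yet,
            # or if it was dramatically modified away from an already-used start.
            if pred["start_model"] not in used_starts or pred["heavily_modified"]:
                chosen.append(pred)
                used_starts.add(pred["start_model"])
            if len(chosen) == n_submissions:
                break
        return chosen

Run on the example above, this would pick Baker2, then Quark1, then Baker4, then Quark2, and finally the heavily modified Quark2-derived model.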

The most important thing to realize about the Free Modeling Targets is that they are very difficult and it is rare for any CASP submission to actually get it right (server or human).
Just go to the CASP9 page:
http://predictioncenter.org/casp9/results.cgi?view=tb-sel
and click on the tiny box on the top right that says FM, then click Show Results.
You'll see that the highest GDT_TS score is 71.83 and the rest quickly drop into the 50s (a GDT score of 100 would be perfect).
Go back to that initial page and select TBM (Template Based Modeling), then click Show Results. You'll see how over half of those GDT scores are above 90.
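
For anyone wondering what those GDT_TS numbers actually measure: GDT_TS is the average, over 1, 2, 4 and 8 Angstrom cutoffs, of the percentage of a model's CA atoms that sit within that distance of their position in the native structure. A minimal sketch, assuming the model and native are already superposed (real GDT_TS searches over superpositions to maximise each count, so this is only an approximation):

    def gdt_ts(ca_distances):
        """Approximate GDT_TS from per-residue CA distances (in Angstroms)
        between a superposed model and the native structure."""
        cutoffs = (1.0, 2.0, 4.0, 8.0)
        n = len(ca_distances)
        fractions = [sum(d <= c for d in ca_distances) / n for c in cutoffs]
        return 100.0 * sum(fractions) / len(cutoffs)

    # A model with every CA within 1 Angstrom of the native scores 100;
    # the best CASP9 free-modeling submission mentioned above scored 71.83.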

Those low FM scores mean that most likely all 15 server models you are working on are completely wrong! As depressing as that might seem, it is actually very exciting, because if you are able to come up with even 1 Free Modeling Foldit prediction that is closer to the native than those of all the other CASP ROLL groups, that would be a remarkable result!
But what this also means is that there is absolutely no reason to submit a model that is just a refinement of a server model (no matter how well it scores). Even if we improve it by 0.1 GDT, the server will get the credit for "nailing" that prediction... not Foldit. There need to be significant structural changes for us to get credit for it.

Again, I'll bring this up in the second half of the Scientist Chat tomorrow, but to answer auntdeen's second question: it is not worth coming up with a "second best" solution starting from the same server model unless the result is so different that you wouldn't be able to tell that it started from that model. (I hope that makes sense in the context of my previous paragraph).

We will always submit 5 completely different Foldit predictions, even if every single Foldit player converged on the same server model and those models all scored far better than the next top-scoring Foldit model that was based on a different start.

spmm
Joined: 08/05/2010
Groups: Void Crushers
thanks beta

So would it maybe help if we could have a sequence-only (extended?) de novo puzzle for the target that ran until the last minute, so we could take our chances? As well as doing the template puzzles, of course. Is the time too short, or do we just not do well enough without templates?

phi16
Joined: 12/23/2008
Defining Goals

Structure predictions have greatly improved in the last two years. It is clear that use of a prediction server creates a good beginning. It is clear that Free Modeling is extremely difficult. It is clear that we are more likely to find a high scorer using prediction than without.

Questions:

1. Using Free Modeling, guided by intuition and scoring, can a player find a high scoring solution?

2. How often, using the past puzzles as examples, have players found good Free Modeling solutions?

3. We can't ignore the fact that template-based scores are over 90 while Free Modeling scores are at best 71. Are Fold.it's days numbered? Are we saying intuition is no longer as good as predictive analysis? Or is Fold.it about something else: learning why proteins fold? To me, using predictions is a form of cheating by looking at the answers instead of trying to figure it out. By trying to follow predictions we will learn very little, though the scores may be higher in the short term. Trying to do well in CASP may be counterproductive for Fold.it at this point in time.

4. Exploration puzzles are an attempt to encourage diversity in solutions. Why not go all the way? In current exploration puzzles, everyone starts the same way and is encouraged to deviate from the given puzzle. Why not give everyone a different start? Players will complain that that just isn't fair, because some will get good starts and some will not. But everyone would be given a new random start and could go back as many times as they'd like for another.

5. I wouldn't be discouraged by a few false starts with new slider tools. Fold.it's future lies in being able to determine what forces are at work when proteins fold. Fold.it needs to determine how and when these forces are applied. To do that, new tools MUST be probed, tested, adopted or rejected. When can we expect to see them return?

Joined: 09/18/2010
Rethink scoring

Not sure if this is the right place, but maybe the scoring could be changed to encourage play towards what we want to achieve.

E.g. give bonus points to teams that hand in high-scoring diverse solutions (no matter if solo or evo), and compensate for team size so that small teams also get bonus points for diverse high-scoring solutions despite their small number of participants (as otherwise they'd be disadvantaged by their limited processing power and manpower).
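
A back-of-the-envelope sketch of what such a bonus might look like (the numbers, the team-size normalisation, and the function itself are entirely made up, just to illustrate the suggestion):

    def diversity_bonus(distinct_models, team_size, per_model_bonus=100):
        """Hypothetical bonus: reward each topologically distinct high-scoring
        solution a team hands in, scaled up for small teams so limited
        processing power and manpower isn't a disadvantage."""
        # A 40-player team gets a 1x multiplier; smaller teams get proportionally more.
        size_factor = max(1.0, 40.0 / max(team_size, 1))
        return per_model_bonus * distinct_models * size_factor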
