Ways to encourage more exploration on each puzzle:

Case number: 845813-2010728
Topic: Game: Tools
Opened by: jeff101
Opened on: Friday, November 6, 2020 - 08:42
Last modified: Saturday, November 14, 2020 - 04:43
Below are some ideas to encourage more exploration on each puzzle:

(1) Make energy vs rmsd dot plots for each player 
so they can see the range of solutions they explored.

(2) In the Open/Share Solutions Menu, list the 
rmsd values between each solution and a "hub" 
solution that the player can pick.
(Fri, 11/06/2020 - 08:42  |  8 comments)
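Idea (2) could be sketched roughly as follows: given a player-chosen "hub" solution and a set of shared solutions as coordinate arrays, list each solution's rmsd to the hub. This is only a toy illustration (plain rmsd with no superposition step, and the names are made up, not any Foldit API):

```python
import numpy as np

def rmsd(a, b):
    """RMSD between two (N, 3) coordinate arrays (assumes pre-aligned)."""
    d = np.asarray(a) - np.asarray(b)
    return np.sqrt((d ** 2).sum(axis=1).mean())

# Hypothetical shared solutions, each an (N, 3) array of atom coordinates.
hub = np.zeros((3, 3))
shared = {
    "soln_A": np.zeros((3, 3)),       # identical to the hub
    "soln_B": np.full((3, 3), 1.0),   # every atom shifted by (1, 1, 1)
}

# The table idea (2) asks for: each solution's rmsd to the hub.
table = {name: rmsd(coords, hub) for name, coords in shared.items()}
# soln_A -> 0.0, soln_B -> sqrt(3)
```

Real structure comparisons would superimpose the structures first (e.g. a Kabsch alignment) before taking the rmsd; the menu would just display the resulting numbers next to each shared solution.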

jacob_n
Joined: 11/26/2012
Groups: Go Science

It would be interesting to see how unique our structure is compared to all other players. I recall that all the solutions get clustered on the server-end, so I've been curious about what that would look like.

jeff101
Joined: 04/20/2012
Groups: Go Science
The following show some dot plots:

Below are dot plots from Foldit's Partition Function Tournament:

Odds are you can find more dot plots if you look around the Foldit site and its Wiki pages.
Joined: 09/29/2016
Groups: Gargleblasters

Pretty sure the feature Jeff wants is the old "Exploration Map", except on a per-user basis instead of just the Top Users.

The Exploration Map is still in-game, albeit non-functional. It's accessible via the "Social" menu (hotkey 'N' is the default, I think) and may indeed be workable for what Jeff is after. (This feature was before my time; I've just seen it often when I accidentally open the Social Menu lol)

jeff101
Joined: 04/20/2012
Groups: Go Science
The other day, I think during the Office Hour, there was a conversation about exploring more instead of doing so much end-game refining, and about how best to encourage exploration.

I think Sketchbook Puzzles were mentioned, but I personally don't like this kind of puzzle. I tend to favor recipes over hand-folding, and most of my favorite recipes use many
Ideas (1) and (2) above involve rmsd values. Foldit dot plots usually show CA-CA (alpha-carbon to alpha-carbon) rmsd values, but such values neglect the positions of sidechain atoms. Solutions with the same CA-CA rmsd value may span a wide range of Foldit scores because their sidechains are positioned differently, so they appear as vertical lines in energy vs rmsd dot plots. Our solutions might look more diverse if all-atom rmsd values were used instead; these would at least show more clearly the effects of repositioning sidechains.
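The CA-only vs all-atom point can be shown numerically. A minimal toy sketch (plain rmsd with no superposition step; the coordinates are invented for illustration):

```python
import numpy as np

def rmsd(a, b):
    """Root-mean-square deviation between two (N, 3) coordinate arrays.
    No superposition step; assumes the structures are already aligned."""
    d = np.asarray(a) - np.asarray(b)
    return np.sqrt((d ** 2).sum(axis=1).mean())

# Hypothetical 2-residue toy: each residue = 1 CA atom + 1 sidechain atom.
ca_1 = [[0.0, 0, 0], [3.8, 0, 0]]     # CA positions, structure 1
sc_1 = [[0.0, 1, 0], [3.8, 1, 0]]     # sidechain positions, structure 1
ca_2 = [[0.0, 0, 0], [3.8, 0, 0]]     # identical CA positions, structure 2
sc_2 = [[0.0, -1, 0], [3.8, -1, 0]]   # sidechains flipped the other way

ca_rmsd  = rmsd(ca_1, ca_2)                # 0.0: CA-only sees no change
all_rmsd = rmsd(ca_1 + sc_1, ca_2 + sc_2)  # sqrt(2): all-atom sees the flip
```

Two structures with identical backbones but flipped sidechains give a CA-CA rmsd of exactly zero, which is why such solutions stack up as a vertical line in an energy vs CA-CA rmsd plot while an all-atom rmsd would separate them.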

Also, for Design Puzzles that allow mutations, you would need to replace rmsd values with something analogous that accounts for both differences in atom positions and differences in the protein's primary sequence.
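For the sequence half of such a combined measure, one simple ingredient would be the fraction of mutated positions. This is only a hypothetical sketch of one possible term, not anything Foldit computes:

```python
def seq_difference(s1, s2):
    """Fraction of positions where two equal-length primary sequences differ.
    A toy sequence-dissimilarity term; a combined metric might mix this
    with a structural (rmsd-like) term in some weighted way."""
    assert len(s1) == len(s2), "sequences must be the same length"
    return sum(a != b for a, b in zip(s1, s2)) / len(s1)

# Two hypothetical 7-residue designs differing at 2 positions:
seq_difference("MKTAYIA", "MKSAYLA")  # 2/7
```

How to weight a sequence term against an atom-position term is exactly the open question in the post; the sketch only shows that the sequence side is easy to quantify.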
jeff101
Joined: 04/20/2012
Groups: Go Science

(3) One way to discourage end-game refinement would be to limit our use of tools that are popular for it. It seems like many end-game strategies use auto, medium, or high wiggle power, and many end-game recipes use lws (does that mean local wiggle selected?). Are there other things often used for end-game refinement? Perhaps having puzzles with only low wiggle power and no lws would encourage more exploration and less end-game refining.

Joined: 09/29/2016
Groups: Gargleblasters

My idea, which I mused about for deterring end-gaming, was to give us a "time allowance" for how long we could run recipes. It wouldn't matter WHICH recipe, whether end-game focused or early-game... we'd only be allowed to run recipes for X hours, after which they'd be disabled.

I figured this would make people hand-fold more and use their recipe time more sparingly, instead of throwing something together, feeding it to a recipe on 4 different clients, and checking on it once in a while to change recipes or settings.
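The "time allowance" mechanic could be sketched as a budget that accumulates recipe runtime and cuts recipes off once it is spent. Everything here is hypothetical (class and method names invented; nothing like this exists in Foldit, whose recipes are Lua scripts anyway):

```python
import time

class RecipeBudget:
    """Toy per-puzzle recipe 'time allowance': recipes run normally until
    the accumulated runtime exceeds the allowance, then they are refused."""

    def __init__(self, allowance_seconds):
        self.allowance = allowance_seconds
        self.used = 0.0

    def run(self, recipe, *args):
        if self.used >= self.allowance:
            raise RuntimeError("recipe time allowance exhausted")
        start = time.monotonic()
        try:
            return recipe(*args)
        finally:
            # Charge the wall-clock time of this run against the allowance.
            self.used += time.monotonic() - start

budget = RecipeBudget(allowance_seconds=4 * 3600)
result = budget.run(lambda x: x + 1, 41)  # runs fine, counts toward budget
```

Hand-folding would be untouched; only calls routed through the budget would stop once `used` passes the allowance.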

jeff101
Joined: 04/20/2012
Groups: Go Science

Dang Formula, "throwing something together, feeding it to a recipe on 4 different clients, and checking on it once in a while to change recipes or settings" is practically the way I play Foldit! I have found that I can get pretty diverse solutions this way. They might be more diverse than the solutions hand-folders get by doing one structure/track at a time and starting over several times per puzzle. Having energy or score vs rmsd plots for each player (as in (1) above) would help us assess how much we explore in each puzzle, and we could fine-tune our strategies to improve this.

One way to get more diversity despite using recipes is to set up multiple initial conditions. For example, on puzzles known to have disulfide bonds, I often use bandsomeSS (https://fold.it/portal/recipe/101275) to set up as many disulfide combinations as I can. Then I run the same sequence of recipes on these different initial conditions. Before long, certain solutions are doing much better than others, and I can pick and choose which ones to keep improving. If I keep good records, I can compare how any two solutions are/were doing at a certain stage of my overall sequence of recipes and make predictions about which solutions will do better at later stages.

Also, some recipes and Foldit tools have built-in randomness, so even if you run them again from the exact same initial condition, you can get a very different result. For some short steps in my overall procedure, I have found that re-running the step from where it began can change the outcome quite a bit, so sometimes I try the same recipe and initial condition 2 or 3 times before picking the best result to continue with in the next stage.
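That restart-and-keep-best strategy is easy to sketch with a stand-in for a stochastic recipe. The "recipe" below is a made-up noise model, purely for illustration (real recipes are Lua scripts running inside the client):

```python
import random

def noisy_recipe(score, rng):
    """Toy stand-in for a recipe with built-in randomness: returns a new
    score that varies from run to run, sometimes better, sometimes worse."""
    return score + rng.gauss(2.0, 5.0)

def best_of(n_restarts, start_score, seed=0):
    """Re-run the same 'recipe' n_restarts times from the same starting
    condition and keep the best result, as described in the post."""
    rng = random.Random(seed)
    return max(noisy_recipe(start_score, rng) for _ in range(n_restarts))

# For a fixed random sequence, adding restarts can only help, since the
# single-run result is among the candidates the max is taken over:
best_of(3, 9000.0) >= best_of(1, 9000.0)  # always True by construction
</imports>```

The same logic applies to trying a step "2 or 3 times": each restart is another draw from the recipe's randomness, and only the best draw moves on to the next stage.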

Joined: 09/29/2016
Groups: Gargleblasters

No, you're right lol. It's certainly a matter of how one goes about it. I phrased it a bit harsher than I meant it to be <_>
What I was trying to say is that there are a lot of recipes whose results, particularly right now with the shift in scoring toward the new metrics we've been given, aren't as reliable as they have been in the past. They're trying to improve the main score, which is all they can really "see" right now (without tweaks to their code, which will take time for the community to sift through and redo), and that's resulting in solutions that are less than ideal. I also can't leave out the problem so many have with getting back to a functioning client, since Remix is so crash-happy. (And based on what Milkshake has said, that is NOT going to be an easy or quick fix, unfortunately.)

Or at least that's been my experience with recipes lately, and I think that still conveys my point, since I'm a good comparison to the normal users, particularly when it comes to configuring the recipes :P There are so many bells and whistles that don't seem to be well documented that I think a lot of us just leave them on their defaults and cross our fingers.

All in all, I'm able to quickly churn out designs in a couple of hours by hand that have as good a score and metrics as what I'd spend all week on (and generally, based on shared screenshots, better metrics than those with much higher scores). That is also more in line with what they are asking from us: more results, no end-game recipe use.

If I had the patience to run this sort of experiment, once I hand-folded something to 'completion' I'd then feed it through a few recipes to see if they could actually improve it while maintaining the metrics. But I don't want to babysit it so that once it crashes, I can restart it :(

[/personal opinion]


Developed by: UW Center for Game Science, UW Institute for Protein Design, Northeastern University, Vanderbilt University Meiler Lab, UC Davis
Supported by: DARPA, NSF, NIH, HHMI, Amazon, Microsoft, Adobe, Boehringer Ingelheim, RosettaCommons