Foldit design update - Part 1

It's been a long time since our last update on Foldit protein design! Here we lay out some recent progress and highlight the latest improvements in proteins designed by Foldit players.

Local Backbone Quality

Unlike α-helical bundle designs, which Foldit players have mastered with relative ease, α/β folds have proven more problematic to design. For some time, we've suspected that the crux of the problem lies in unfavorable local backbone conformations. In particular, we found that the α/β proteins designed by Foldit players seemed to have loops that are never observed in natural proteins.

The Ideal Loop Filter, which was introduced last June, has helped Foldit designs remarkably. And in subsequent updates spanning the last several months we've seen further improvement in the backbone quality of Foldit-designed proteins. The box plot below shows the average local deviation from natural protein backbones in top-scoring Foldit designs. (Imagine breaking up each designed protein into 9-residue fragments, for each fragment searching natural proteins for a fragment with a similar backbone, and then measuring the RMSD to the closest match. If every backbone fragment of a design has a close match in a natural protein, that design should have a low mean RMSD; if there are regions of the design that have an unusual backbone, the design will have a higher mean RMSD.)
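This fragment-based metric isn't something you can run inside Foldit itself, but the idea can be sketched in a few lines of Python. The sketch below is a minimal illustration, not the actual analysis code: `design_backbone`, `natural_fragments`, and `fragment_rmsd` are hypothetical placeholders for the design's backbone coordinates, a pre-built library of 9-residue fragments from natural structures, and a helper that superimposes two fragments and returns their RMSD.

```python
# Sketch of the backbone-quality metric described above: slide a 9-residue
# window along the design, find the closest natural fragment for each window,
# and average the best-match RMSDs. All inputs are hypothetical placeholders.

FRAGMENT_LENGTH = 9

def mean_fragment_rmsd(design_backbone, natural_fragments, fragment_rmsd):
    """Average best-match RMSD over all 9-residue windows of a design.

    design_backbone   -- list of per-residue backbone coordinates for the design
    natural_fragments -- iterable of 9-residue fragments from natural proteins
    fragment_rmsd     -- function(frag_a, frag_b) -> RMSD after superposition
    """
    best_rmsds = []
    for start in range(len(design_backbone) - FRAGMENT_LENGTH + 1):
        window = design_backbone[start:start + FRAGMENT_LENGTH]
        # Closest natural match for this window of the design
        best = min(fragment_rmsd(window, frag) for frag in natural_fragments)
        best_rmsds.append(best)
    # A design whose every fragment has a close natural match scores low;
    # unusual loop conformations push the mean up.
    return sum(best_rmsds) / len(best_rmsds)
```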

You can see that backbone quality in Foldit designs improved significantly after imposing the Ideal Loop Filter; disabling Rebuild; adjusting the IdealizeSS torsions; and introducing the Blueprint Panel. The dotted line marks a reference value from successful Baker lab designs; all designed proteins from Koga et al. fall below that line. In the latest design puzzles with the Blueprint Panel, we see that most high-ranking Foldit designs also fall below that line.

Rosetta@home Folding Funnels

The improvement in Foldit backbones is reflected in other types of analysis. With the improved backbones, Rosetta@home is better able to predict the structure of Foldit designs from their amino acid sequences (explained here).

Below is a set of 14 Foldit player designs that were successfully folded by Rosetta@home—all but one originate from puzzles using the Ideal Loop Filter. The strong "funnel" shape of each plot indicates not only that Rosetta is able to sample the intended fold (note the numerous red points with RMSD < 2 Å), but also that Rosetta predicts the intended structure to be the most stable. Compare these folding funnels to those of earlier α/β designs.
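For illustration only (this is not the Rosetta@home pipeline itself), a funnel plot like the ones shown below boils down to a scatter of energy against RMSD-to-design, with a cluster of relaxed copies of the original design overlaid. Here is a minimal matplotlib sketch, assuming you already have the (RMSD, energy) pairs for both sets of models:

```python
import matplotlib.pyplot as plt

def plot_folding_funnel(ab_initio, design_relaxed, title):
    """Scatter energy vs. RMSD-to-design for a set of predicted models.

    ab_initio      -- list of (rmsd_angstrom, energy) for de novo predictions
    design_relaxed -- list of (rmsd_angstrom, energy) for perturbed copies of
                      the design, approximating its local energy landscape
    """
    rmsd, energy = zip(*ab_initio)
    plt.scatter(rmsd, energy, s=8, c="red", label="de novo models")
    rmsd_d, energy_d = zip(*design_relaxed)
    plt.scatter(rmsd_d, energy_d, s=8, c="green", label="design (relaxed)")
    plt.xlabel("RMSD to designed structure (Å)")
    plt.ylabel("Rosetta energy")
    plt.title(title)
    plt.legend()
    # A "funnel" means the lowest energies occur only at low RMSD, i.e. the
    # designed fold is predicted to be the most stable conformation.
    plt.show()
```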


mimi, Mark- (Contenders) — Puzzle 1245


Bletchley Park, Mark- (Contenders) — Puzzle 1248


tokens, Galaxie (Anthropic Dreams) — Puzzle 1251


tokens, Galaxie (Anthropic Dreams) — Puzzle 1257


tokens (Anthropic Dreams) — Puzzle 1257


dcrwheeler — Puzzle 1263


fiendish_ghoul — Puzzle 1285


gitwut (Contenders) — Puzzle 1290


Bletchley Park, Cyberkashi, Mark- (Contenders) — Puzzle 1294


Hollinas, Bruno Kestemont, Scopper (Go Science) — Puzzle 1294


tokens (Anthropic Dreams) — Puzzle 1294


fiendish_ghoul — Puzzle 1297


fiendish_ghoul — Puzzle 1299


retiredmichael (Beta Folders) — Puzzle 1299

Each of the designs above has been reverse-translated into synthetic DNA, which has been inserted into E. coli and expressed in our lab for further testing (read more about lab testing here). However, in the list above I've omitted four particularly promising designs that are already showing encouraging results. Next week we'll post a follow-up with more information about those designs, alongside some brand new experimental data.
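As a toy illustration of that first step, reverse translation simply maps each amino acid back to a codon. The sketch below uses an arbitrary one-codon-per-amino-acid table; real gene synthesis optimizes codon usage for the E. coli host and handles many details this sketch ignores.

```python
# Toy reverse-translation sketch: map each amino acid to a single codon.
# Real gene design optimizes codon usage for the expression host; this
# table is arbitrary and illustrative only.
CODON = {
    "A": "GCT", "C": "TGT", "D": "GAT", "E": "GAA", "F": "TTT",
    "G": "GGT", "H": "CAT", "I": "ATT", "K": "AAA", "L": "CTG",
    "M": "ATG", "N": "AAT", "P": "CCG", "Q": "CAG", "R": "CGT",
    "S": "TCT", "T": "ACC", "V": "GTT", "W": "TGG", "Y": "TAT",
}

def reverse_translate(protein_sequence):
    """Return one possible coding DNA sequence for a protein sequence."""
    return "".join(CODON[aa] for aa in protein_sequence.upper())

# Example with a short, made-up designed segment:
print(reverse_translate("MKTAYIAK"))  # -> ATGAAAACCGCTTATATTGCTAAA
```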

A big thank you is due to all the Foldit players who have been designing proteins every week! We're learning a lot about protein design from your contributions, and credit goes to all participants—not just to those players acknowledged above. We appreciate your patience and persistence as we experiment with new tools and filters. Keep up the great folding!

( Posted by bkoep  |  Tue, 02/28/2017 - 19:34  |  7 comments )
Joined: 09/29/2016
Groups: Gargleblasters
:D

While I can't help but admit, and thus accept, that a 'pair of surfing hotdogs' (as I call them) is such an ideal/preferred design, I will still continue to create my far more curious, quirky, and outright bizarre designs. After all, I'm having fun in the end, and at least they typically rank in the top 40 (usually with little to no recipes run)... granted, only just making said 'Top 40', but it makes me proud regardless! :P

Nevertheless, congrats to those above with successful ones! And who knows, maybe mine is one of said 4?? .... HA! Right. lol

Joined: 09/29/2016
Groups: Gargleblasters
.

Oh and of course, thank you bkoep, for the writeup!

NinjaGreg
Joined: 05/21/2010
Groups: Go Science
Very interesting!

I was wondering how the various improvements were being evaluated. Thanks for the explanation, bkoep! Since we love feedback, is there any way to integrate this knowledge into the tools we have, or perhaps some new ones?

bkoep
Joined: 11/15/2012
Groups: None
Knowledge integration

It's not clear to me what you'd like integrated into Foldit; do you have specific ideas of things you'd like to see?

The major insight here is that backbone quality matters, and that Foldit players tend to produce much better backbones with the Ideal Loop Filter. This knowledge is already being integrated, in the sense that this configuration of design puzzles (with Ideal Loop Filter, no Rebuild, etc.) is likely to stay around.

If you're asking about incorporating higher-level analyses into Foldit (e.g. fragment RMSD, Rosetta@home funnels), unfortunately these calculations tend to be expensive and are too slow to run on the fly while playing a puzzle.

We have other ideas for improving things further (mostly sequence-related), but no new tools planned.

Joined: 03/05/2015
Groups: Gargleblasters
This is great information

bkoep, I appreciate you taking the time to write this up and post it. It would be wonderful to see more of this kind of feedback on how puzzle results are being used in the lab - I would also be interested in learning if and how these studies are helping "outside the lab". Looking forward to part II!

Joined: 09/24/2012
Groups: Go Science
Wow! Impressive improvement!

Correct me if I'm wrong in interpreting the pictures:
The first picture: results are statistically much better from left ("old" puzzles) to right (the most recent puzzles with the Blueprint). All players get better and better results with the introduction of new filters and tools.

Testing the best designs "de novo" with Rosetta@home
1) You take a set of the best-scoring versions of the "same" primary design from one of the top players for the puzzle. This is shown in green at the bottom left of the pictures.
2) You give one primary design to Rosetta@home and it comes back with a mass of "final" 3D results, each of them being a red dot on the picture. On the right are final 3D structures that deviate enormously from the player's design. On the left are results that are similar to the player's design. Moreover, at the top are many low-scoring 3D results, either unfinished or stuck in a relatively low-scoring "local minimum". Each bottom-right red dot is a relatively stable 3D structure, BUT not the same as the player's design. Only the very bottom left is close to the given (green) design AND has an excellent energy score (never better than the player's design in the given examples).
3) If a red dot at the bottom-left end of a nice funnel overlaps with a green dot, you've got it! You can be sure the player's design is the one you'll have in the lab. Then you can predict the behavior and function of this "synthetic", "perfect" protein.

Funnels
It's better from the top pictures to the bottom pictures. We see that the RMSD moves much further to the left (0 is a "perfect match", <2 is great).
Puzzle 1245: baseline (a nice red funnel, but low best Rosetta@home scores; the player's design scores far better).
Puzzles 1248, 1251, 1257: baseline + Ideal Loop Filter (not such a nice funnel, "unstable", not sure it will work in the lab). 1263 is comparable with 1245 (also 70 residues), but Rosetta@home finds a better score, closer to the player's.
Puzzles 1285 to 1299: baseline + Ideal Loop Filter + no Rebuild (compare 1285 with 1248, and 1290 with 1251 - Rosetta@home finds a better score, closer to the player's). The second 1299 has a much better, almost perfect funnel and score than the first 1299.
No result illustrated: baseline + Ideal Loop Filter + no Rebuild + ideal SS torsions
No result illustrated: baseline + Ideal Loop Filter + no Rebuild + ideal SS torsions + Blueprint
No result illustrated: baseline + Ideal Loop Filter + no Rebuild + ideal SS torsions + Blueprint + helix restrictions
All of this is very encouraging.

bkoep
Joined: 11/15/2012
Groups: None
Yes!

I think you've got the main points. A couple of minor corrections:
• The box plot at the top does not represent all player designs; only the "best-scoring designs" (essentially, the top 10 solo and evolver designs for each puzzle).

• The green cluster in each funnel plot is not a "set" of Foldit models. The models in this cluster reflect small perturbations to the original design, and are meant to represent the "local energy landscape" of the original design.

• The red dots do not necessarily have to overlap with the green dots to imply a positive result. For proteins of this size (between 65 and 90 residues), any red dot with RMSD <1 Å is essentially a dead-on match with the design structure.

• In addition to your qualitative assessment of the funnels, note also that in more recent puzzles we tend to see multiple positive hits coming from a single puzzle (see also Part 2).
