Foldit for Leap
As I mentioned yesterday, we are officially announcing a new version of Foldit for use with Leap Motion! One of our devs working on the project, Dun-Yu, answered some questions about the release.
How close are we to releasing Foldit for Leap?
We plan to release it in the next month or so.
What's different about Foldit for Leap?
It provides an opportunity to get closer to a true hand-folding experience. You can manipulate the 3D geometry of the protein with your fingers and drag it to a true 3D position. You can essentially twist and move the protein as if you were manipulating it with your own hands, like a real-world object.
How long has the team been working on this?
Since last Fall.
What types of players might be interested in this?
People who like hand-folding, for example. It gives you better control when playing: you don’t need to do so many camera rotations to position the protein. It's all pretty intuitive.
What’s the difference between Foldit for Leap and Foldit for Kinect?
Kinect allows multiple people to play the same protein, but Leap caters to a one-player experience. It's more precise and, to top it off, you don’t need a big workspace. You are working very close to the screen (and your protein structure physically). You're using your hands instead of your whole body. With the Kinect you have to play at a distance of 2-3 meters from the protein itself, but with Leap you can get right up close to the protein. I have a video of some Leap gameplay from last year, when we first started working on it, that people might be interested in seeing.
What's also cool is that you can point your finger at the screen to make mouse clicks. Just another cool gesture for controlling the game.
What's required for the Leap version?
You'd just need to purchase the Leap camera, install their drivers, and run the Leap version of Foldit which will be coming out soon.
What has been the most difficult part of designing this new version?
It’s all about working through human interaction. For example, if I want to support a pinch gesture, I have to account for the fact that everyone pinches in slightly different ways. We are using a vision-based method to track human gestures, and it’s a big task to accommodate all of that variation. Optimizing detection for everyone is never easy, but getting it to work is really rewarding.
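To give a rough feel for the tolerance-based thinking Dun-Yu describes, here is a hypothetical sketch of pinch detection. This is not Foldit's actual Leap code; the threshold value and the fingertip coordinates are made up for illustration.

```python
import math

def is_pinch(thumb_tip, index_tip, threshold_mm=25.0):
    """Treat thumb and index fingertips closer than a loose threshold as a pinch.

    A generous threshold is one simple way to accommodate the fact that
    everyone pinches in slightly different ways.
    """
    return math.dist(thumb_tip, index_tip) < threshold_mm

# Fingertips ~10 mm apart: counts as a pinch
print(is_pinch((0.0, 0.0, 0.0), (10.0, 0.0, 0.0)))   # True
# Fingertips ~60 mm apart: an open hand
print(is_pinch((0.0, 0.0, 0.0), (60.0, 0.0, 0.0)))   # False
```

In practice a real detector would also smooth the signal over several frames so a single noisy frame doesn't toggle the gesture on and off.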
Any last features that you think players will be excited to hear about?
We have some cool extra tools for Leap like the pin tool. It lets you pin part of the protein structure to the workspace and move the rest of the structure around it. Kind of like freezing sections but more handy when using your fingers to manipulate the structures.
Thanks, everyone! If you have any questions feel free to leave them in the comments.
- katfish ( Posted by katfish | Thu, 05/09/2013 - 17:18 | 9 comments )
It’s time to work on some larger proteins.
We’ve just posted a new puzzle with 398 residues. This protein is too large for the normal Foldit client to handle, so we've introduced “centroid mode”. Where normal “full atom mode” puzzles have to calculate the score from the individual atom positions of all the sidechains, “centroid mode” uses approximations to speed up the process.
This new method of scoring is a radical change from the normal “full atom mode” score function. Tools such as shake, mutate, and wiggle sidechains have been disabled because the data required to run them is no longer being generated. Other tools, such as wiggle and clashing importance, look the same on the surface, but under the hood they are working in a very different way. You will find these differences require your Foldit strategies to change, and we encourage you to try out new methods of working with the protein that take advantage of them.
One of the most apparent changes will be in how clashing importance works. In “full atom mode” when you turn the clashing importance down, there are many other elements of the score that normally take over and generally compress the protein. In “centroid mode” these scoring elements do not exist to the same extent. Because of this, you will see that the clashing importance slider does not perform exactly as it did before.
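To give a rough intuition for why collapsing sidechains speeds things up, here is an illustrative sketch. The coordinates, cutoff, and clash-counting scheme are made up for this example; they are not Rosetta's actual centroid score function.

```python
import itertools
import math

# Toy residues: each is a list of sidechain atom positions (made-up numbers).
residues = [
    [(0.0, 0.0, 0.0), (1.0, 0.5, 0.0), (1.5, 1.0, 0.0)],
    [(2.0, 0.0, 0.0), (2.5, 0.5, 0.0)],
    [(6.0, 0.0, 0.0), (6.5, 0.5, 0.0), (7.0, 1.0, 0.0)],
]

CLASH_CUTOFF = 2.0  # Angstroms; illustrative only

def full_atom_clashes(residues):
    """Count clashing atom pairs between different residues (work grows
    with the square of the total atom count)."""
    n = 0
    for r1, r2 in itertools.combinations(residues, 2):
        for a, b in itertools.product(r1, r2):
            if math.dist(a, b) < CLASH_CUTOFF:
                n += 1
    return n

def centroid(atoms):
    """Average position of a residue's sidechain atoms."""
    return tuple(sum(c) / len(atoms) for c in zip(*atoms))

def centroid_clashes(residues):
    """Same idea, but each sidechain is collapsed to one pseudo-atom,
    so the work grows only with the square of the residue count."""
    cents = [centroid(r) for r in residues]
    return sum(1 for a, b in itertools.combinations(cents, 2)
               if math.dist(a, b) < CLASH_CUTOFF)

print(full_atom_clashes(residues))
print(centroid_clashes(residues))
```

The coarser representation is also why the detailed clash signal behaves differently: individual atom pairs that clash in full-atom mode can disappear entirely once each sidechain is a single point.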
We would appreciate your input on these changes, and we want to work with you to improve this new method of folding proteins. This is the first step towards Foldit players being able to tackle entirely new sets of problems that were untenable before. ( Posted by tamirh | Mon, 04/29/2013 - 22:48 | 3 comments )
New achievement added!
Back in 2011 we congratulated Bletchley Park for being one of the first players to surpass the one million moves required for the Perpetual Motion Machine achievement. Two years later, BP has reached a whopping fifty million moves -- and then some! We've created a brand new achievement to mark the occasion...
Congratulations, Bletchley Park! ( Posted by katfish | Fri, 04/19/2013 - 22:11 | 7 comments )
So, the CASP10 results have been up for a while on the CASP webpage as most of the natives have been solved and released.
If you looked at the results you saw that Foldit did quite well in the Refinement category!
The Template-Free category is always a very tough one, and unlike in CASP9, there was no amazing de-novo prediction in CASP10. In CASP9 there was one amazing prediction that was highlighted by the assessors: T0581. It had been generated by the RosettaServer and the best prediction came from the Void Crushers (this was in the NSMB paper). But this year it was a very tough category and no group really "nailed" any template-free target.
The Template-Based category was different, where lots of CASP10 teams were able to do well. These are the targets where there is already a close structure that you can start from, or many different templates. This category was a lot harder for Foldit, because unlike the other CASP10 teams (who get to use many bioinformatic tools) all we gave you was an extended chain and the Alignment Tool. Even with just that, though, many Foldit players were able to do very well. The main issue is that we have trouble selecting these models.
This was also an issue in the Refinement Category, as you can see in the table below:
Each row represents a different CASP10 refinement target that Foldit was able to participate in (some of the proteins were too big for Foldit).
The second column is the GDT of the starting model given to us by the CASP organizers.
The third column shows you the GDT of the best Foldit prediction in the set of models that was filtered by the WeFold team.
The last column is the GDT of the model that the CASP organizers deemed to be the best prediction from any CASP10 team (not just us).
You clearly generated some amazing predictions (most of them are a lot better than the starting refinement model!) and, had we been able to pick them out, they would have beaten the predictions selected by the other participants at CASP10. In the refinement category, the last column highlights the winners for each of those targets, but you clearly generated better models than what they were able to pick out and submit! Unfortunately, we are very bad at selecting those solutions.
What is interesting is that you also seemed to get better as CASP went on, but that could be because the first refinement targets were smack in the middle of CASP10 (during the Template-Based and Template-Free puzzles) whereas eventually you were able to focus solely on the refinement targets.
Sadly, we've known for a while that we are very bad at selection, which is why three Foldit Groups asked us (before CASP10 started) to be able to pick out THEIR OWN Foldit group's submissions.
The next table shows you which CASP10 team submitted the best prediction for each of the CASP10 refinement targets that Foldit participated in:
Here, the second column is sorted by the "Improvement in GDT over the starting refinement model."
This table shows how much the very best prediction submitted to CASP10 (by any team) actually improved on the starting refinement model that the organizers initially gave us.
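The "improvement" being ranked here is just the difference between a submitted model's GDT and the starting model's GDT. A minimal sketch, with entirely made-up target names and numbers:

```python
# Hypothetical refinement results: target -> (starting GDT, best submitted GDT).
# Names and values are invented for illustration.
results = {
    "TR_A": (55.0, 62.3),
    "TR_B": (48.1, 49.0),
    "TR_C": (70.2, 68.5),  # a "winning" model can still be worse than the start
}

# Improvement in GDT over the starting refinement model, sorted best-first.
improvements = sorted(
    ((target, best - start) for target, (start, best) in results.items()),
    key=lambda pair: pair[1],
    reverse=True,
)
for target, delta in improvements:
    print(f"{target}: {delta:+.1f} GDT")
```

Note that a negative delta means the "best" submission for that target actually moved away from the native relative to the starting model, which is exactly the pattern described for some of the winning predictions below.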
It is obvious from this table that the FEIG group won the refinement category in CASP10, but you can see that a lot of their "winning predictions" didn't actually improve over the starting model that much.
[For anyone interested: Michael Feig's CASP10 team utilized many independent explicit solvent molecular dynamics simulations, which Foldit doesn't have access to, since Rosetta currently doesn’t include explicit water molecules]
But, if you look at which refinement predictions had the best improvements in CASP10: Foldit is at the top!
Anthropic Dreams, Void Crushers, and one of the WeFold branches were all able to find, select, and submit those amazing predictions! When Foldit wins, it wins big, but when we select poorly (because we were bold in our selections) then it really hurts us.
I think the take-away message is that selection is still the main issue... but that you are much better at it than we are! To be fair, I'm sure the other CASP groups will argue that they have this problem too, and that they also generated better models that they weren't able to select.
This leads me to Hand-Folding:
The above figure is an RMSD plot for Puzzle 689b: Hand-Folding CASP10 T0711 Repost. It was a 33-residue freestyle puzzle, with templates, and it had 3 disulfide bridges (which you totally got perfectly!). The Rosetta energy (y-axis) is what you see in the game, as it corresponds to the Foldit score (except that it is negative on this plot). The x-axis is the RMSD, representing how far each Foldit solution is from the native (RMSD = 0 is a perfect match, RMSD = 33 is completely wrong).
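For readers curious about the x-axis metric: RMSD is the root-mean-square of the distances between corresponding atoms in two structures. A minimal sketch with made-up coordinates, omitting the superposition step (e.g. the Kabsch algorithm) that real RMSD calculations perform first:

```python
import math

def rmsd(coords_a, coords_b):
    """Root-mean-square deviation between two equal-length lists of
    3-D coordinates. Assumes the structures are already superimposed."""
    assert len(coords_a) == len(coords_b)
    sq = sum(math.dist(a, b) ** 2 for a, b in zip(coords_a, coords_b))
    return math.sqrt(sq / len(coords_a))

native = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (2.0, 0.0, 0.0)]
model  = [(0.0, 0.0, 0.0), (1.0, 1.0, 0.0), (2.0, 0.0, 0.0)]

print(rmsd(native, native))  # 0.0: a perfect match
print(rmsd(native, model))   # small, since only one atom is displaced
```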
You can see above that the solution with the lowest Rosetta energy (the top-scoring Foldit solution, the one that is easy to pick out by score) is actually one of the closest to the native.
Now compare that RMSD plot to the one for Puzzle 694: CASP T0711 Disulfide Repost Round 2:
This is the same plot as above, except for Round 2 of this puzzle (when Lua scripts and sharing were allowed). Here you can see that if we selected the best Foldit score (lowest Rosetta energy), it would not be the solution that is closest to the native. What we want is for the lowest points to be as far to the left as possible.
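The selection problem in these plots can be sketched in a few lines. The energies and RMSD values below are made up to mimic the Round 2 situation, where the lowest-energy model is not the one closest to the native:

```python
# Each model: (rosetta_energy, rmsd_to_native). Made-up numbers.
models = [
    (-310.0, 7.8),   # lowest energy, but far from native
    (-295.0, 1.2),   # closest to native, but not the best score
    (-288.0, 4.5),
]

selected = min(models, key=lambda m: m[0])   # what energy-based selection picks
closest  = min(models, key=lambda m: m[1])   # what we wish it had picked

print("selected by energy:", selected)
print("actually closest:  ", closest)
print("selection succeeded?", selected == closest)
```

When the cloud of solutions looks like the Round 1 plot, these two picks coincide; when it looks like Round 2, automated selection by score misses the best model.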
[The reason we reposted these puzzles after CASP10 is that when you formed the disulfide bridges during CASP10 it didn't score as high as if you didn't form them, so even though there were many Foldit predictions that were correct, they were not easy to pick since they were not the top-scoring ones. This has been fixed with the new disulfide bonus that was on these recent puzzles.]
These are obviously preliminary results (as this was only the fifth Hand-Folding puzzle we've posted), so we need to look into this some more, but it might explain why we had so much trouble picking out your solutions that were closest to the native during CASP10. ( Posted by beta_helix | Wed, 04/10/2013 - 21:27 | 0 comments )
We want your most interesting designs
We want to give you the opportunity to send us your favorite Puzzle 691 designs. We realize that your favorite might not be your top-scoring solution (there is a feedback post about this), so you can send us your favorite manual saves and we will look at them. Details on how to send us your saves are in the comments of this news post. The deadline for submissions is Thursday, May 18th, 17:00 GMT.
If you want to know what we'll be specifically looking for in order to experimentally validate your designs, please read the Symmetric Protein Design Guidelines blogpost.
Looking forward to seeing all of your great designs.
NOTE: If you submitted your designs prior to 18:00GMT on April 10th, you will need to resubmit the solution. ( Posted by tamirh | Tue, 04/09/2013 - 21:03 | 4 comments )