Competition results for influenza HA binder design

Friday, March 26 was the last day of our Influenza HA binder design competition! After Puzzle 1968 closed, we collected all of the solutions that were shared with scientists and tallied the valid submissions from each player.

The final rankings

LociOiling - 43 designs
CharlieFortsConscience - 32 designs
ucad - 21 designs
Dudit - 20 designs
spvincent - 10 designs
Bruno Kestemont - 10 designs
nspc - 8 designs
BootsMcGraw - 7 designs
silent gene - 7 designs
ichwilldiesennamen - 6 designs
akaaka - 5 designs
Enzyme - 5 designs
Galaxie - 5 designs
robgee - 3 designs
dcrwheeler - 3 designs
zippyc137 - 3 designs
Anfinsen_slept_here - 2 designs
OWM3 - 2 designs
irk-ele - 2 designs
NinjaGreg - 1 design
georg137 - 1 design
martinzblavy - 1 design
Jpilkington - 1 design
grogar7 - 1 design
alcor29 - 1 design
stomjoh - 1 design
Blipperman - 1 design
Norrjane - 1 design
phi16 - 1 design
infjamc - 1 design
sgeldhof - 1 design
blazegeek - 1 design

Congratulations to LociOiling, who submitted an astounding 43 designed binders for influenza HA!

What did we learn from this competition?

To recap, the aim of this competition was to trial an experimental reward system that encourages players to create the greatest number of quality designs, rather than focus on creating the single highest-scoring design (as in normal Foldit puzzles).

We think this could be a way to make Foldit more effective for protein design research problems, because Foldit is currently limited by design throughput (not by the quality of top-scoring designs). Optimizing for the highest Foldit score works well for protein prediction problems, but the problem of protein design is not so straightforward; a higher-scoring design is not always better. In addition, there is a secondary concern that competitive players tend to optimize solutions so tenaciously that late-game refinement exceeds the limits of our score function.

The competition puzzle was set up to mirror the previous Puzzle 1962: Influenza HA Binder Design: Round 3. Both puzzles used the same score function and Objectives. The only difference between the two puzzles was a scoring offset of 7,500 points (so a 10,000 point competition solution is equivalent to a 17,500 point solution in Puzzle 1962), and the competition puzzle ran for two weeks instead of just one. Using Puzzle 1962 as a control, we can look at the competition results to answer the two big questions about our experimental reward system:

1. Does the competition reward system actually increase throughput?
2. Are competition submissions still high-quality solutions?

Let’s start with question #2.

Are competition submissions still high-quality solutions?

Yes, competition designs appear just as promising as designs from regular puzzles.

This was largely enforced by rule #1 of the competition, which set a threshold of at least 10,000 points for all valid submissions. Foldit scientists chose this threshold based on the results of the previous Puzzle 1962. It seemed 10,000 points could be achieved only if you were able to satisfy most of the Objectives and also attain a reasonable base score.

Note that 10,000 points is still a very high bar for this puzzle, and most of the soloists in Puzzle 1968 were unable to reach this score. All of the players to reach this level have been playing Foldit for at least 6 months, and many of them are experienced veterans. (Bravo to akaaka, who joined Foldit in September 2020--the “youngest” Foldit player to submit a valid competition solution!)

We should also clarify that many solutions below the 10,000 point threshold are still scientifically valuable and will be analyzed by Foldit scientists as possible candidates for lab testing. The 10,000 point threshold does not represent a cutoff for “scientifically useful” solutions. Rather, past this threshold we think further optimization is not very helpful, and a player could contribute more to research by working on another solution.

So, we know that all of the valid submissions scored at least 10,000 points, which should correspond to promising designs. But let's spot check a couple of values to be certain they are reasonable...

Among valid solutions, the worst DDG value was -32.4 kcal/mol, and the worst Contact Surface value was 336. While these values do fall short of their targets (DDG < -40; Contact Surface > 400), these are still promising numbers that could indicate a successful binder. The majority of submissions met the targets for both of these difficult binder design Objectives.
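As a rough sketch, that spot check amounts to a simple filter against the two Objective targets. The sketch below uses the worst-case numbers reported above plus two hypothetical submissions for illustration; the real analysis of course uses Foldit's own metrics:

```python
# Hedged sketch: checking binder-design submissions against the Objective
# targets described above (DDG < -40 kcal/mol; Contact Surface > 400).
# All values except the reported worst case are hypothetical.

TARGET_DDG = -40.0            # kcal/mol; more negative is better
TARGET_CONTACT_SURFACE = 400

def meets_targets(ddg, contact_surface):
    """Return True if a submission meets both binder design Objective targets."""
    return ddg < TARGET_DDG and contact_surface > TARGET_CONTACT_SURFACE

# Hypothetical spot-check values, including the worst reported submission
submissions = [
    (-32.4, 336),   # worst valid submission: misses both targets
    (-45.1, 422),   # hypothetical: meets both targets
    (-41.0, 390),   # hypothetical: meets the DDG target only
]

passing = [s for s in submissions if meets_targets(*s)]
print(len(passing))  # -> 1
```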

This gives us confidence that the 10,000 point threshold was stringent enough to ensure that all submissions were high quality designs. Note that Foldit scientists will still run additional analyses on these solutions before selecting designs for lab testing.

Does the competition reward system actually increase throughput?

Yes, players created quality designs at almost triple the rate of a normal puzzle.

After any Foldit puzzle closes, we comb through all the puzzle solutions to pull out distinct designs, using protein sequence and structural alignment to sort out duplicate and unfinished solutions. After the competition puzzle ran for two weeks, we identified 242 distinct solutions with at least 10,000 points (this includes solutions from players who opted out of the competition and played Puzzle 1968 normally). By contrast, in one week our “control” Puzzle 1962 yielded 43 distinct protein designs above the equivalent score threshold. Accounting for the difference in puzzle duration, this works out to a rate increase by a factor of 2.8x.
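The 2.8x figure follows directly from the raw counts once the puzzle durations are normalized; a minimal check of the arithmetic:

```python
# Reproducing the throughput comparison described above:
# 242 distinct >=10,000-point designs in 2 weeks (competition puzzle) vs.
# 43 distinct designs in 1 week (control Puzzle 1962).

competition_designs, competition_weeks = 242, 2
control_designs, control_weeks = 43, 1

competition_rate = competition_designs / competition_weeks  # designs per week
control_rate = control_designs / control_weeks

increase = competition_rate / control_rate
print(f"{increase:.1f}x")  # -> 2.8x
```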

This is a good sign! It indicates that Foldit does have the capacity for greater design throughput, and that a tweak to our reward system could make Foldit more effective for research in protein design. However, the experimental system used here may still need some adjustments...

Was the “puzzle reset” rule effective against duplicated work?

Mostly. But there were several instances where a player, after submitting a solution, restarted the puzzle and rebuilt almost the exact same solution from scratch!

The puzzle reset rule was intended to force players to make multiple distinct designs. Without this rule, we were afraid that each player would make only a single 10,000 point solution, and then repeatedly submit it with trivial changes. In effect, this would boost their competition standing without actually making a meaningful scientific contribution.

Nevertheless, there were some cases where a player submitted two valid solutions with almost the exact same sequence and structure, even though they were designed completely independently after a puzzle reset. This strategy circumvents the purpose of the puzzle reset rule. If we want a reward system that accurately reflects the scientific contribution of each player, we will need to make some changes to the system used in this competition.

A successful experiment

Congratulations again to our champion LociOiling and all of the players who participated in the competition!

One thing that is still missing from this analysis is player feedback. We invite all players (participants and observers) to leave a comment below with your thoughts about this competition. Was gameplay significantly different than in normal puzzles? Did you enjoy it more or less? Do you have suggestions that would make this kind of competition more fun, or more productive?

Keep up the great folding, and practice your binder design skills in the latest Puzzle 1973: Tie2 Binder Design: Round 1!

( Posted by bkoep  |  Sun, 03/28/2021 - 20:59  |  17 comments )
Joined: 12/06/2008
Groups: Contenders
Any help to prevent duplicate solutions?

"...there were some cases where a player submitted two valid solutions with almost the exact same sequence and structure, even though they were designed completely independently after a puzzle reset."

Considering that I submitted eight valid solutions and was credited for only seven, I am going to guess this was the case.

Does anyone have a script that compares two solutions to see how much they have in common? I might have spent that entire evening developing another design, had I known.

robgee
Joined: 07/26/2013
Compare 2 solutions

These 2 recipes could work to help compare solutions:
You can copy/paste their output into a text file.

SS Edit 2.0.1 by LociOiling
AA Edit 2.0.1 by LociOiling
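For the comparison itself, something as simple as a per-residue identity check would do, once each design has been reduced to a plain one-letter sequence string (e.g. pasted from the recipe output into a text file). A minimal sketch, with hypothetical sequences:

```python
# Hedged sketch: comparing two designs by per-residue sequence identity.
# Assumes both sequences are plain strings of one-letter amino acid codes
# of equal length (as you might paste from AA Edit output).

def percent_identity(seq_a, seq_b):
    """Percentage of positions where the two sequences have the same residue."""
    if len(seq_a) != len(seq_b):
        raise ValueError("sequences must be the same length")
    matches = sum(a == b for a, b in zip(seq_a, seq_b))
    return 100.0 * matches / len(seq_a)

# Two hypothetical 10-residue designs differing at a single position
print(percent_identity("ACDEFGHIKL", "ACDEFGHIKM"))  # -> 90.0
```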

Joined: 12/06/2008
Groups: Contenders
10,000 points was a good target score.

I made at least thirteen attempts to meet the 10K minimum score; not all were successful. The ONLY solutions I had that met the minimum score for submission had the full DDG and SC bonuses, one or fewer BUNS, and one or fewer bad loops (but not both BUNS and bad loops).

The criteria were challenging, and only mildly frustrating. 8/10, would play again.

robgee
Joined: 07/26/2013

10k was a good target.
Took me a week to get 1st solution.
11 attempts for 3 solutions.
Gameplay difference:
>> endgamed for way less time <2hrs.
Enjoyment:
>> more fun 'cause it was a challenge,
>> also more frustrating but in a good way.
More productive:
>> Lol! You got 2.8x more solutions,
>> how many more do you want!! :p
Challenging but fun, would play again.

spvincent
Joined: 12/07/2007
Groups: Contenders
I enjoyed this puzzle

I enjoyed this puzzle: I think the format is preferable to the limited-move style of puzzle we had previously, where there was something of a feeling of being "rushed" (not in a time sense, clearly, but rather the move limit acted as a disincentive to backtracking when midway through a puzzle).

I thought I'd submitted 13 solutions. Turns out I forgot to upload one (oh well) but I was wondering about the other two, which maybe were flagged as invalid for some reason. Failure to reset properly perhaps, although I thought I was pretty careful about that.

Look forward to more puzzles like this.

Joined: 05/03/2009
Groups: Contenders
This was a most welcome diversion

I liked this puzzle a lot, as it gave us all the benefit of time. So often, I've wished "if only I had longer..." so when this puzzle was posted, I was instantly drawn. It ended up feeling like a competitively meaningful sandbox puzzle. It took me a week to post 4 solutions, but I started to get a feel for what worked and what didn't, which guided my approach and allowed me to refine the process so that I was able to churn out 3 or 4 solutions a day.

The vast majority of my 33 were tri-helical bundles. I had a couple of quad helices, and a couple of tri-sheets with bi-helicals nestled underneath. I began to notice a sweet spot that appeared to satisfy the DDG and Contact Surface bonuses repeatedly, so I concentrated on preparing solutions that consistently held that successful helix in place, and then varied the other two helices by one or two sidechains to make them different enough to qualify. And you can see the results here -

And I too, inadvertently created a duplicate, by mishandling the blueprint setup at the beginning of the process and not realising at the time. Interestingly, those 2 solutions ended up differing by one single residue by the end.

And I also want to send congrats to LociOiling. I thought I was in with a shout with 32, but it was not to be. Well played sir.

Joined: 09/24/2012
Groups: Go Science
wow I like the video


And congrats on your big number of winning shares, and rank 2!

Joined: 09/24/2012
Groups: Go Science

Hahaha: "But there were several instances where a player, after submitting a solution, restarted the puzzle and rebuilt almost the exact same solution from scratch!"

LOL, we players will always try to find a way to "win" a competition even if it's not scientifically interesting. Just for fun, and/or if it can save us time.

In the end, I developed a kind of "industrial" design production with a succession of always the same strategies/actions/recipes.

I feel I'm favoured with a (new) multicore computer. This kind of competition might disfavour owners of old computers (with few clients).

Suggestion: you could try to "correct" this competitive advantage by only considering (for competition, not for science) a maximum of 1 design per day.

Gaming suggestion: is there a way to reward the valid shares to scientists? For example, by giving a +1 final bonus point for each valid share, as a separate "puzzle" named "bonus for puzzle x". In this case, LociOiling would gain 43 global points, etc.

LociOiling soloist score, and mine, would change as follows:
LociOiling 3538+43=3581
Bruno Kestemont 3528+10=3538

Or a built-in bonus system that would "recognise" good shares and immediately reward points in the puzzle score. It would be amazing to discover afterwards that a winning player actually didn't find the best-scoring solution but "only" gained a lot of sharing rewards. That would make the competition strategies more elaborate than simply trying to get the highest score.

ucad
Joined: 03/16/2020
Groups: None
Does or should Foldit mine for amino acid sequences?

Does or should Foldit mine for amino acid sequences during gameplay? Recipes used to mutate the higher scoring solutions must be generating many unique sequences still over 10k.

Perhaps a few endgame recipes/features that work based off amino acid conservation and point loss thresholds would be useful. Ones that generate a cloud of unique mutated solutions rather than grind away without mutating.

NinjaGreg
Joined: 05/21/2010
Groups: Go Science
Great idea

Up until now, I liked having three puzzles going at once so that when one got to the endgame I could give it less attention, though by then I was only interested in the score. With this puzzle, it was fun to try different initial designs out.

I like Bruno's suggestion of only considering one submission per day, so those of us who have slower computers can still compete.

I did try three designs. The second one couldn't score high enough, the third I think I forgot to submit.

Count me as a favorable response!

nspc
Joined: 03/26/2020
Groups: Go Science
NinjaGreg, don't forget the goal of the puzzle.

Foldit is more a cooperative game than a competitive one.
The goal was to improve the number and the quality of solutions.
Of course I like competition too, and the leaderboard is an incentive.
If some players have a little advantage over other top players because they have a better computer, I don't think it is a real problem.
We can have a good rank with fewer than 10 solutions, and that is very nice.
It is good if a player makes a lot of solutions; I am happy for LociOiling, and I hope it helped the scientists.
If we want to make sure we don't always make similar solutions, maybe points in this kind of competition should be based on solutions actually validated by scientists.
If two solutions are very similar, maybe the scientists will select only one.
That might encourage players to play more for science and try more diversity.

The experience was good for me; I tried new solutions each day.
I began 15, and 8 reached 10,000 points. It is interesting, because now we ask different questions, like:
"How can I make creating a protein easier?" "Maybe I need new tools to be more efficient?"
So I am trying new approaches.

agcohn821
Joined: 11/05/2019
Groups: Foldit Staff
Thank you

Hey folks--thank you all so much for the feedback here, these are all so helpful for us on the team! Questions have also been passed along--super appreciated!

Joined: 12/27/2012
Groups: Beta Folders
a few notes...

Bruno is right about the "industrial" part of this competition.

I started with the primary and secondary structure of my solution to puzzle 1962. This solution was a standard three-helix bundle.

I used AA Edit and SS Edit to apply my 1962 results to a fresh start.

Using idealize secondary structure and cutpoints, I dragged the three helices into rough alignment. I also had the solution from 1962 open in another client as guide.

After stabilizing with shake, wiggle sidechains, wiggle all, Fuzes, etc., I moved the helical bundle close to the target. I used the glycan (the giant sidechain at segment 13) on the target as a rough guide. In some cases, I closed the cutpoints early on; in others, I left them open until later.

After getting everything somewhat aligned and somewhat stable, I tried various steps, all on low wiggle power at the start. In some cases, I did something like Mutate No Wiggle or Mutate Combo early. In other cases, I did only a little Mutate All or even went directly to a version of DRemixW.

Some of the solutions went to 10,000 while still on low, others ended with only 9,400 or so.

In most cases, a GAB recipe on medium wiggle power was the next step, followed by a Microidealize or a Cut and Wiggle. Then I moved to something like Worm LWS or Banded Worm Pairs Infinite Filter as a finishing recipe.

I haven't looked at all my solutions in detail, but I did keep a spreadsheet with the primary structure of each one. From the spreadsheet, I can see that the second half of my solutions all had "kewl" in the same spot, so not as much variation as in the first half.

The industrial process really applied to the second half, when I used less variation in the steps and spent less time on each step. So there was less chance for diversity.

I ended up submitting 48 solutions, so it looks like 5 were rejected as duplicates, or due to user error on my part.

I liked having two weeks to work on solutions, even if the more creative part happened in the first 10 days. This puzzle let me practice my handfolding and rethink the sequence of recipes that I use on many puzzles.

There were some challenges in managing so many solutions. I'll put those in a separate post.

Joined: 12/27/2012
Groups: Beta Folders
problems with lots of solutions...

The Open/Share Solutions dialog makes it difficult to keep track of multiple solutions. It could use additional filters (such as showing only solutions shared with scientists) and sort options (sort by date, or ascending/descending score). Also, a resizable window and maybe a search box would help.

In a couple of cases, I accidentally shared with my group instead of with scientists. I found and fixed at least two or three like that; I hope I didn't miss any.

I started adding a segment note on segment 84 to identify each new start. This is probably a good idea on any puzzle where you start more than one solution.

The segment notes revealed another problem. In at least a couple of cases, the solution seemed to change in mid-recipe: I'd start with solution 6, but solution 3 would be back when the recipe finished. I called that "bleed through".

To get around that issue, I started each new solution in its own uniquely-named track. That stopped the bleeding.

The bleed through issue may result when a recipe does something like restoring a quicksave slot that it hadn't previously saved. Or perhaps restoring recent best without first setting recent best. Or perhaps there's a bug, with recent best one of the usual suspects. I've been seeing a similar problem in Banded Worm Pairs Infinite Filter, but I haven't tracked down the source.

I should have taken notes about each solution, such as which recipe was used and how long it took. I used to keep a spreadsheet like that in the early days, but stopped.

I also didn't make as many interim saves as usual, which became a problem in some cases. The limit of 10 solutions shared with self and 10 shared with group was one issue, but the Open/Share Solutions dialog getting cluttered was the bigger concern.

Next time, I'd make more local saves just in case something bad but fixable happens. I'd also get back to taking progress notes, maybe using Google Sheets or something similar.

Joined: 09/24/2012
Groups: Go Science
quicksave bug in bwpi

Indeed, in Banded Worm Pairs Infinite Filter there is a "quickload" bug I haven't found time to fix yet. It has been on my To Do list for more than a year ;)

infjamc
Joined: 02/20/2009
Groups: Contenders
I wonder if 10k is sufficiently high as a threshold?

As I had previously mentioned here, I was able to produce a solution that was technically "abusing the score function"... Despite missing the intended binding site, it was able to score 10,321 (for an 85th percentile finish) thanks to the following:

  • The design hydrogen-bonds to the beta sheet via another beta sheet in the anti-parallel direction; and

  • The binding interface was stuffed with large side chains, which was enough to max out the DDG and Contact Surface bonuses. (With so many exposed hydrophobics, I would expect the structure to misfold in reality.)

Given the possibility of this exploit, I wonder if the bonuses from the Objectives should be reduced, or perhaps the 10,000 point threshold should be raised further? Or would it make sense to magnify the interaction energy / hiding penalties for this type of puzzle?

Joined: 09/24/2012
Groups: Go Science
Idem for groups

The current competition was quite limited to soloists.

Due to the "industrial" character and the many inventive industrial processes I read about here, I'm thinking about the same competition, but for groups:

- industrial soloists design a lot of diverse first hand-folds, and share them with the group

- industrial evolvers use computer power to evolve them, one by one, to 10k+ and share them with scientists

- the group, as well as all individual players involved, is rewarded for the end-game number of scientifically valid shares

(I know Foldit is able to recognise all contributing players in a puzzle; I don't know if this can be automated "on the fly" for scoring.)
