Influenza HA binder design competition

We are announcing a special competition for the newest binder design puzzle! We are challenging players to design as many binders as possible for influenza hemagglutinin (HA).

Unlike puzzle rankings, your competition ranking will NOT be determined by your best score in the puzzle. Instead, the winner of the competition will be the soloist player who submits the greatest number of valid solutions before the puzzle closes March 26 at 23:00 GMT.

There are two rules for a valid submission:
1. The solution must have a score greater than 10,000.
2. You must reset the puzzle for each submission.

Rule #2 means that each submission must be restarted from scratch, and no work may be shared between submissions. Foldit keeps track of each solution's history, and we will reject multiple submissions that come from a common "intermediate" solution. Loading a saved solution or clicking on the Undo Graph will NOT reset the solution history. You must use the Reset Puzzle button to begin each new submission from scratch.

To participate in the competition, simply submit each solution scoring over 10,000 points using the Upload for Scientist button in the Save Menu, and include the word “submission” somewhere in the upload title. For logistical reasons, we will only consider soloist solutions in the special competition. Evolved solutions from two or more players will not count as valid submissions.

The competition rankings and submissions will be showcased in a special blog post after the competition ends. The winner will be highlighted in the April 2021 Lab Report, where bkoep will take a close look at the designs from the winning player.

Note that Puzzle 1968: Influenza HA Binder Design Competition will also function like a regular puzzle. If you do not want to participate in the special competition, the puzzle will still reward points as usual, based on your best score when the puzzle closes.

The backstory: Protein design throughput

This competition will serve as a kind of experiment for Foldit, as we think about ways to make Foldit more effective for scientific research.

Currently, one of the big problems facing protein design in Foldit is throughput. We simply aren't generating enough designs to test in the lab. For a typical binder design experiment, we can expect a success rate of about 0.1% for binders that satisfy all of our binder metrics. That means we need to test thousands of designs in order to find a hit, and a typical Foldit puzzle only produces a couple hundred designs.
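The arithmetic behind that throughput gap can be made concrete. The sketch below (assuming, as the post states, a ~0.1% per-design success rate and treating designs as independent, which is a simplification) estimates the expected number of hits and the chance of finding at least one:

```python
# Back-of-the-envelope estimate of binder-design throughput,
# assuming a ~0.1% per-design success rate and independent designs.

def expected_hits(n_designs, success_rate=0.001):
    """Expected number of successful binders among n_designs."""
    return n_designs * success_rate

def prob_at_least_one_hit(n_designs, success_rate=0.001):
    """Probability that at least one of n_designs succeeds."""
    return 1 - (1 - success_rate) ** n_designs

print(expected_hits(200))                          # a typical puzzle: 0.2 expected hits
print(round(prob_at_least_one_hit(200), 3))        # ≈ 0.181
print(round(prob_at_least_one_hit(3000), 3))       # ≈ 0.95
```

A couple hundred designs gives well under a 1-in-5 chance of a single hit, while a few thousand designs pushes the odds above 90% — which is why the scientists want volume, not just a few highly polished solutions.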

At the same time, we suspect that a lot of late-game optimization in Foldit design puzzles is wasted effort, and this work may not actually improve the final protein design. We’ve noticed that, after initial construction and refinement of a protein design, many players resort to heavy-duty scripts that run for days on end, making tiny changes to squeeze out the last few points and get to the top of the puzzle leaderboards. If that late-game optimization does not lead to higher-quality designs, then we would like to redirect that effort towards new designs.

Move limits

In the past, we've experimented with the Move Limit Objective as a possible approach to this problem. The Move Limit prevents players from spending time running heavy-duty optimization scripts, because these scripts will quickly burn through the allotted moves. We had hoped this would refocus player efforts toward multiple puzzle attempts.

While this seems to be moderately effective, the Move Limit has some problems. There's no strong incentive to actually restart a puzzle once you hit the Move Limit. It's also difficult to calibrate the actual number of moves that should be allotted, since different players with different play styles will naturally require different numbers of moves to make a good protein design.

A different approach

A more radical, but more direct, approach is to revise the overall reward system in Foldit (at least for protein design problems) to encourage multiple solutions for each puzzle. In this kind of system, the goal of competitive players (make many good designs) would be better aligned with the goal of Foldit scientists (test many good designs). So, instead of awarding points based only on your best score, perhaps we should award points for multiple high-scoring solutions.

This competition will serve as a kind of pilot experiment for such a reward system, where rankings reflect the number of solutions contributed to each research problem. We’ll be looking to see how this system impacts puzzle results, and whether it has any unintended effects on gameplay. (Of course, we also hope this competition will produce lots of binder designs for influenza HA!)

The competition will remain open for two weeks. Players will have until March 26 to create as many 10,000-point solutions as possible. Play Puzzle 1968: Influenza HA Binder Design Competition now!

Edit: See the followup blog post for the final results of this competition!

( Posted by bkoep  |  Fri, 03/12/2021 - 23:48  |  5 comments )
robgee
Joined: 07/26/2013
Sounds good

Nice, attack the problem directly.
I'm in.
Hope it works out, sounds like fun as well.

nspc
Joined: 03/26/2020
Groups: Go Science
Nice ^^

It will create more diversity, and players who played for score will now play more for science ^^.
Maybe it will take several rounds to see improvements, because it can change the way some people play.
I just started to learn how to make recipes; I think we will need some tools better suited than those made for our old way of playing.

infjamc
Joined: 02/20/2009
Groups: Contenders
Another radical idea...

Perhaps another possibility to consider in the future is to adjust the Foldit score further instead of simply using a linearly-transformed version of the Rosetta energy score?

For example, what if the Foldit score were made "path-dependent" by adding an adjusted version of "net change in Rosetta energy score" from each move to the current score, with exponential decay being applied based on move number? The idea is that this would mathematically guarantee that late-game scripts would only provide diminishing returns, and thus encourage people to reset and explore other options without having to set a hard cap on the number of moves.
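The decaying-credit idea described above can be sketched in a few lines. This is purely illustrative — the function name, the decay constant, and the exponential form are hypothetical, not an actual Foldit or Rosetta mechanic:

```python
import math

# Hypothetical "path-dependent" score: each move's improvement in the
# underlying energy score is credited to the displayed score with a
# weight that decays exponentially in the move number, so late-game
# micro-optimization yields diminishing returns.

def path_dependent_score(energy_deltas, decay=0.001):
    """Sum each move's score gain, discounted by exp(-decay * move_number)."""
    score = 0.0
    for move_number, delta in enumerate(energy_deltas):
        score += delta * math.exp(-decay * move_number)
    return score

# Two play styles achieving the same 1,000 points of raw gain:
early = [10.0] * 100        # 100 substantial moves
late = [0.01] * 100_000     # 100,000 tiny script-driven moves

print(path_dependent_score(early) > path_dependent_score(late))  # True
```

Under this weighting, the 100-move solution keeps almost all of its raw gain, while the 100,000-move script recovers only a small fraction, which is exactly the incentive to reset and explore that the comment describes — without a hard move cap.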

infjamc
Joined: 02/20/2009
Groups: Contenders
(Side note)

By the way, are people allowed to submit solutions that are likely to be invalid for the intended scientific purpose, but otherwise score above 10,000? (I have one such solution and thought that it could still have value as a "counter-example" of sorts, perhaps for the purpose of making future adjustments to the score function so that it's "less exploitable.")

bkoep
Joined: 11/15/2012
Groups: Foldit Staff

Feel free to submit anything over 10,000 points! One of the questions of this competition/experiment is whether we can set realistic score thresholds for a puzzle. Even if your solution is problematic, that will still be informative for us!
