New proposal for social structure of foldit, in a more general, abstract form

Case number: 699969-993147
Topic: General
Opened by: Ignacio
Status: Open
Type: Suggestion
Opened on: Friday, July 13, 2012 - 08:40
Last modified: Monday, July 16, 2012 - 23:49

I got flamed for the last one, but still... I am writing this to include some changes suggested in other people's comments and to make the proposal more general.
Ideas:

1) Make all recipes available to all players; no group-specific recipes should be allowed, so that all players compete fairly. Eliminate group competition. Convert groups into social entities, not competitive entities.

This should allow all players to compete on equal terms. Unfairness is perhaps the worst aspect of the game today. Foldit should not function as a corporation game, in which groups often behave as private corporations under no monopolistic control. Tools must not be hidden. Groups should be structured as collaborative, not competitive, efforts. Groups = learning, sharing information, ideas.

2) Differentiate more clearly between soloist efforts and evolver efforts. Eliminate the possibility of using information from other players to improve soloist scores. Once you start using other people's solutions, you become an evolver. Allow all players to evolve the best solutions of all players at all times.

As I explained in my previous post, I think independent thinking is not sufficiently encouraged or rewarded with this social structure. Top soloists should be only those who come up with independent solutions, not those who efficiently copy other people. Also, it makes no sense to evolve the best solution of a group when it is irrelevant to the very top score. Again, this is due to the idea that groups must compete, which I think is wrong. On the contrary, it is much better to have all players work on the very top solutions as evolvers at some point in the puzzle.

3) Create a classification for top scriptwriters and another for top programs based on how many times each program is downloaded.

We should recognize the efforts of all scriptwriters, make them well known to everybody, and allow all players (not just people within a group) to interact with them and suggest new ideas.

I previously made a more detailed proposal about how to implement all this precisely, but the critical ideas are all here. Of course, the implementation could be quite different from what I suggested while keeping the same ideas.

Given that it is likely this post is also going to be flamed, I will simply not read the answers. Best to all and I hope this helps.

Ignacio

(Fri, 07/13/2012 - 08:40  |  20 comments)


Joined: 06/17/2010

Flame on!

1. -1
Cooperation inside a group, competition between groups/players.
No competition - no fun.
There is a very small number of "secret" recipes compared with shared ones. Most group recipes are posted globally once all the bugs are removed and the script is really doing something good (at least that's how I do it; my team is a debug team :)

2. say what? -100
Peeking at a screenshot should make me an evolver too?
All-hands puzzles are evil, as has been described in many places. We need more diversity instead of a better score at almost the same position (for science).

3. +10
This one was working, more or less, a long time ago; only the scoring function showed odd things (the usage count was OK, but the points gained by a script were broken).
Scripts need descriptions (if you're a writer, ADD one finally!) and categorization.

Joined: 04/15/2012
Groups: Beta Folders

My thoughts exactly. I do like the download number.

Ignacio
Joined: 05/10/2008
Groups: Go Science

Not flamy enough, so I will answer.

1) Competition should mean competition for the top spots, which are the only relevant ones from a scientific point of view and also the only ones that are real fun to play for. Sadly, more than 50% of our time (even for the strongest players) is spent either working on our own solutions, which are often largely suboptimal (and thus not only scientifically irrelevant but also boring), or on other people's (group) solutions that are often also suboptimal.

Finally (let's be optimistic), we reach 15th position solo and 2nd in the group, about 100 points behind the best score. Then we get 65 solo points and we feel we have achieved something. But, in fact, we have achieved nothing; we have just wasted our time for nothing other than a position in a table of scores. A bit of ego playing, but no science and no big thrill either. This dynamic makes no sense, either scientifically or as players, and should be avoided as much as possible.

Why shouldn't all of us play for the very top spot in each puzzle? That would be fun.

2) The number of secret recipes is unknown. I can say that several of the recipes that I have from the Go Science group-only archive are much better than anything similar in the public repository. Several others are competitive with the best and quite different from what is public, so they also provide an advantage. And I still have not had time to check in detail all that is available, so I think I may be missing some good ones (!). I assume that other groups have some exceptional tools too, otherwise we would be winning all puzzles. I have actually noticed that Go Science as a whole is not competitive in particular types of puzzles, and the simplest hypothesis is that our tools are not good enough compared with those available in other groups.

This is a real problem. Closed groups with hidden recipes are hampering the progress of the whole community.

3) Screenshots should not be allowed for soloists, only for evolvers! Especially at the beginning, maximal independence among soloists means maximal variation among solutions, which is in principle best.

However, remember that, according to my proposal, all players can move to any of the top solutions at all times, so if you think you must peek at somebody else's work, or you are tired of falling short in a particular puzzle, you just become an evolver and keep going. No harm done, except that you are no longer pretending to be a soloist when you are in fact refining other people's work.

4) Description and categorization of scripts are a must, I agree.

In summary, my proposal is based on the idea that independent, creative top soloists, together with top scripts, are the foundation of our success as a community. After that, a strong team of evolvers (which could be all players together) may greatly help to refine the soloists' work and improve the final results. Competition should be based mostly on ideas and skills. Right now, competition is significantly biased by the sharing of privileged information within groups and by private programs. That is, I think, absurd.

Best

Ignacio

Joined: 04/19/2009

You make one erroneous assumption about "the top spots, which are the only relevant ones from a scientific point of view".

This is not the case. Foldit's greatest success (the monkey HIV) was an evo that was not the number one solution for that particular puzzle - and it wasn't accomplished by scripts; it was late-game handwork that nailed it. Good thing that all scripts weren't dumped into global that week... perhaps everyone would have been having too much fun trying them out to get top rank instead of getting it right.

For CASP, while some submissions are top ranked, here are Foldit's submissions for T0722 (puzzle 585 & 589):

585 - rank 6 - petr2 - no group
585 - rank 7 evo - krulon & micheldeweerd & cbwest - foldeRNA (Contenders finished ahead)

589 - rank 15 - smilingone - Beta Folders
589 - rank 33 - auntdeen - Anthropic Dreams
589 - rank 34 - hpaege - no group

Soloists on my team took ranks 1 - 2 - 6 - 7 - 8 - 10, and our high evo was obviously not from my solution in puzzle #589. I only received 32 points and likely brought down my ranking with that finish - but to me it's priceless!

These submissions are not unusual - during CASP, Foldit has used many solutions for submission that were much lower ranked than the "winning" solutions. Please see Madde's excellent work at this page - you can click on any of the puzzles to see the Foldit submissions: http://de.foldit.wikia.com/wiki/CASP10

Our software is far from perfect, and our global top scores are not by a long shot the best gauge of value to science.

Ignacio
Joined: 05/10/2008
Groups: Go Science

I think you are largely right in all that you say, but it does not alter my point of view.

Of course, in CASP you can submit several solutions, so the ones submitted must be those that are 1) very good according to standard evaluations (energy, etc.) and 2) the best for a given fold model. It makes no sense to send the same solution 5 times with minimal variations. You must choose as submissions several alternative folds (especially if you get some that are very different), provided the solutions have reasonable quality, even if some of them are evaluated as considerably worse than others. I think that is the main reason for choosing suboptimal solutions.

In any case, I checked (I knew the page, but not that the submissions for some CASP puzzles were already there, thanks for that) and almost always the very top solution is submitted (7/10 in the first 10 cases).
I have no clue why there are cases in which it is not submitted, but it happens mainly when the same protein has been used in several puzzles, so perhaps the solutions in one puzzle were much better than in the other(s), and those top solutions are the ones that are not included. By the way, I can't evaluate this point well, but you seem to be able to. I don't see the CASP submissions for 585 and 589 separately; where are those? You were 7th in 585, and still you say it is the solution in 589 which was submitted. How do you know that? This would allow me to see more clearly the true level of correlation.

I did not systematically check whether the solutions submitted were always or almost always among the top ones, but in general they are. Otherwise, what we do would not make sense. Also, it must be exceptional that a very poor scorer has 1) a peculiar fold (otherwise you would choose a better scorer) and 2) a good energy score.

In any case, community dynamics do not change with all this. My proposal should INCREASE fold diversity by forcing people to work independently. What is decreasing diversity is people massively copying other people's solutions, as happens now. And also, I think the only way for us to play is to try to obtain the highest scores for each particular model, generated by soloists as independently as possible, with evolvers concentrating on improving those top solutions. I suggested making the top 10 solutions available to everyone for evolving, but this could be increased to 15, 20 or more if that works better.

It is true that this may not work sometimes, but we should still concentrate on it, because it is the only rational thing that we can do. The fact that sometimes we can get it right by getting it all wrong does not change the fact that our only way to evaluate what we are doing is by trying to generate top scores and increase the very top scores. Now, I don't think the current social structure is doing that. On the contrary, I think it is making people lazy about generating new structures, making many people unable to efficiently improve those structures (due to the lack of the hidden tools that other people have), and inefficient at improving the best structures generated, most often simply because they cannot work on them. I am tired of looking at scores that are 400 points higher than mine and 200 points higher than my group's while I keep twisting my backbone (protein-wise and personally). I doubt the likelihood of my personal solution being a black swan compensates for the points that all the top solutions are losing by my not being able to work on them. And the new structures that I could occasionally develop from them. And not only the very top ones, but all those strong competitors that I could SEE have potential... only I cannot see them at all. Then multiply that by 25-50 people with about the same skills. What a waste.

In any case, many thanks for the post.

I

tokens
Joined: 11/28/2011

Elaborating on Auntdeen's comment that diversity, not top score, is what matters most:

You said in your opening statement: "Also, it makes no sense to evolve the best solution of a group when it is irrelevant to the very top score. Again, this is due to the idea that groups must compete, which I think is wrong. On the contrary, it is much better to have all players work on the very top solutions as evolvers at some point in the puzzle."

I think the opposite is true. It's best to have as many solutions evolved as possible (at least if the solution is in, say, the top 30). Unfortunately, the evolver scoring system doesn't encourage people to evolve anything other than the very top-scoring solution. This is where groups help, by making people evolve a larger number of solutions. Letting everyone evolve all solutions would decrease diversity.

That said, I still somehow like your idea of letting everyone evolve everyone's solutions. But it would require a different scoring system for evolvers than the current one to make it work in practice.

Ignacio
Joined: 05/10/2008
Groups: Go Science

Auntdeen and you made convincing statements, and I agree. I stressed the importance of the "top score" too much in my post, and your points are correct. However, I also suggested that the top 10 soloist solutions be made available to everybody. If soloists work fully independently, as I suggested, it is likely that all these solutions will be quite different. However, after reading all that you two said, I think 20 or 30 could be even better.

I did not think about the evolver score, but you are right. A simple option would be to give evolvers as many points as the difference between the score when they first get the solution and the score when they finish. On one hand, this seems quite strange, given that it would mostly benefit those evolvers who dramatically improve bad solutions. However, a nice idea also emerges: right now evolver work is very technical, concentrated on slightly improving the best solution of a group. With the other scoring, evolvers would be interested in improving any solution as much as possible, no matter how bizarre. Some of those solutions could perhaps make it to the top scores through that divergent, creative work, even if the soloists who first built them failed.

So the problem is just preventing people from starting to evolve solutions with a -900000 value and winning the competition with a single wiggle. Although this would in general be difficult if only the top 20-30 scores or so can be evolved, it could still occur, especially at the beginning of a puzzle. The simplest solution is probably to allow evolvers to start only some hours after the beginning of the puzzle, so the top soloist scores are already quite good. After all, my proposal goes in the direction of encouraging everyone to try some soloist work first and later, if that fails or they get tired, shift to evolving. This could be a way to favor that.

A further refinement derives from another of my suggestions, which was to make everyone become an evolver some hours before the end of the puzzle. An additional possibility would be to have a first classification for "early evolvers", those who improve solutions before that point, and a second classification for "last-minute evolvers", in which everybody would be working with the already highly evolved solutions for further improvement. Again, in both cases, the value would be the difference between the scores before and after the evolver works on the solution. Still, I now think that, if these two classifications existed, my idea of forcing everybody to become an evolver becomes just too rigid; it is totally unnecessary. It would be simpler to open the "last-minute evolver" competition and let people choose between continuing soloist work and starting last-minute evolver work in, say, the last 12 or 24 hours of the puzzle.
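To make the mechanics concrete, here is a minimal sketch of the delta-based evolver credit described above, split into the two proposed classifications. The names, the 12-hour start delay and the 24-hour last-minute window are purely illustrative assumptions, not anything Foldit actually implements.

```python
from dataclasses import dataclass

@dataclass
class EvolveSession:
    player: str
    start_time_h: float   # hours since the puzzle opened when evolving began
    start_score: float    # score of the solution when the evolver picked it up
    end_score: float      # score when the evolver stopped working on it

def evolver_credit(session, puzzle_length_h=168.0,
                   evolver_delay_h=12.0, last_minute_window_h=24.0):
    """Return (classification, points) for one evolver session.

    Points are simply the score gained while the evolver held the solution.
    The start delay mirrors the suggestion that evolving only opens once
    the top soloist scores are already reasonably good.
    """
    if session.start_time_h < evolver_delay_h:
        return ("evolving not open yet", 0.0)

    points = max(0.0, session.end_score - session.start_score)

    if session.start_time_h >= puzzle_length_h - last_minute_window_h:
        return ("last-minute evolver", points)
    return ("early evolver", points)

# Example: an early evolver who improves a shared solution by 85 points.
print(evolver_credit(EvolveSession("alice", 40.0, 9215.0, 9300.0)))
```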

I am certain I can come up with some other ideas for the evolver scores tomorrow, but now it is time for me to sleep. Thanks for your interesting feedback.

Ignacio

Joined: 04/19/2009

Sorry, your summation of CASP submissions is incorrect: "it must be exceptional that a very poor scorer has 1) a peculiar fold (otherwise you would choose a better scorer) and 2) a good energy score".

I am extremely familiar with the diversity scores from my team, which is a large sampling of Foldit in general, because to date I have done 175 CASP ROLL & CASP10 individual submissions for my team. The Contenders, Void Crushers and AD took beta_helix up on his offer in this post back in the winter: http://fold.it/portal/node/991099. Each of the participating teams is sent only their own team's scores, and runs them through PyMOL to see the diversity (Susume is the wonderful volunteer in our group who does this). Beta_helix sends the 3 teams his picks for each CASP target - he has first choice, of course - so that we do not duplicate any of his submissions.

So yes, I know which puzzles Foldit chooses for submissions (and can look up ranks), and I know for my own team. And I can tell you that your assumptions are completely wrong. It is a very rare puzzle, in fact, that has submissions from Foldit that are all from the top ranks.

What Foldit needs for scientific purposes is diversity. That's why many were upset when the software was not rewarding handbuilding for a period of time - that's where the diversity comes from, not scripts.

In fact, Foldit at the moment is benefiting greatly from having the team structure that it has - instead of 5 submissions per CASP target, we have 20 - and more diversity that way in the submissions.

Your perception is yours - I see it completely differently. I'd rather focus instead on teaching handwork, with better tools than we now have - webinars, a place to post HD videos, and some standard free software for making videos that we could use for teaching. I'd rather the devs be able to come up with something that rewards diversity (and yes, it's my understanding that they are working on that).

There is an old post in the forum ( http://fold.it/portal/node/266290 ) with a lively discussion from 2008 about groups and scoring. In that thread, Zoran had this to say: "our goal is both to provide the engaging game play experience, and to eventually enable everyone to advance science. Groups serve a purpose for both goals. We suspect that there are a number of skills involved in the protein folding process, and groups would enable us to discover much better solutions by teaming up folks that are very good at finding initial good scores, and folks that have a knack for improving such solutions further. This is why the groups are important."

...And because Susume and I look at all of our team's diversity scores for each CASP puzzle and compare the pictures in PyMOL, I can also tell you that you have another incorrect assumption: "What is decreasing diversity is people massively copying other people's solutions, as happens now"... What happens in our group, which is all that I can speak to (but I suspect it is the same for the other top teams), is that it is extremely unusual to see that someone in our group has loaded a high solution and used it as a guide.

One last note - you have commented here that you were "stunned by the good ideas developed by your group when I saw them for the first time. It was not the recipes, but the IDEAS that were novel when compared with what was available". Two things, imho - all the top groups have novel ideas... but they are born from the easy give and take of a close-knit chat (group) room. Someone tries something and says - hey, I just tried something here, can this be scripted? Someone else chimes in to say, well, that could be limiting, why not do it this way instead? Another person adds an enhancement...

How would you expect that easy flow of ideas for scriptwriters to draw upon without groups?

Ignacio
Joined: 05/10/2008
Groups: Go Science

Thanks a lot for your post. My perception has indeed changed. I still think I am right on several points. One, make soloists work alone, make their evaluations truly independent, precisely to encourage diversity. Collaboration among players should start in a different, evolver phase. This would indeed be a deep change to the structure that we have now. Two, make all tools open, to allow all players to compete on equal terms. Another significant social change. Three, encourage scriptwriters by giving them a formal evaluation.

But, with your information, I think I was mistaken when I proposed eliminating or otherwise modifying the structure of groups. I see now that groups may encourage diversity more than they damage it. If groups do indeed get totally different significant solutions very often (something I did not think was happening), then they are clearly another good way to increase diversity.

Additional ideas: good scriptwriters should be encouraged to collaborate with people in different groups to get more ideas, more programs, and also more points in their ranking. Once you eliminate group-only recipes, that should be easy, because scriptwriters would work for the whole community.

Another idea is that it is perhaps bad to save only the top solution for each player. It is not unusual to find two very different structures with quite similar scores. A typical case is when locating an external helix in two different places gives you almost identical scores. For CASP especially, would it be possible to encourage players to send a second solution (not to be awarded points) if it is within a given range of points of their first one and the structures are truly different?

A final idea is that the current evaluation tools could probably be improved by giving additional value to players who come up with peculiar structures, i.e. giving points for creating new folds. I don't know how feasible that would be.

In a final, more personal aside, if the scoring system is so disconnected from the true significance of a solution that it is statistically just as good to be first or 12th, I think playing becomes much less attractive. In particular, working hard to "perfect" a solution, taking hours to gain the last few points, barely makes sense. Also, getting a score as an evolver by improving a solution by a couple of points may be perfectly absurd.

Thanks again

Ignacio

infjamc
Joined: 02/20/2009
Groups: Contenders

"Getting a score as an evolver by improving a solution by a couple of points may be perfectly absurd."

For this issue, a simple fix would be handing out evolver credit not by the top score, but by the number of points gained. To prevent people from abusing the system by uploading the starting configuration and letting a teammate wiggle it, the scores can be weighted. For example, the score gain must occur above the median score for it to be counted, and a higher weight could be applied to score gains closer to the top score (a quadratic function would suffice).

infjamc
Joined: 02/20/2009
Groups: Contenders

To clarify the idea, here's an example. Suppose that the median score is 10000, the top score is 12000, and the function is (end_score_above_median^2 - start_score_above_median^2)/40000.

Evolving from 10000 to 12000: 100 points
Evolving from 9000 to 11000: 25 points (remember, only the portion above 10000 would count)
Evolving from 11000 to 12000: 75 points
Evolving from 11990 to 12000: 0.9975 points
Evolving from 11900 to 12000: 9.75 points
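A minimal sketch of this weighting in Python, reproducing the numbers above; the function name, and generalizing the /40000 divisor as (top - median)^2 / 100 so that going from median to top is worth 100 points, are my own assumptions for illustration, not part of the proposal.

```python
def evolver_points(start_score, end_score, median_score, top_score):
    """Weighted evolver credit: only the gain above the median counts, and
    gains closer to the top score weigh more (quadratic weighting).
    The scale makes median -> top worth 100 points; for the example above
    that is (12000 - 10000)^2 / 100 = 40000."""
    scale = (top_score - median_score) ** 2 / 100.0
    start_above = max(0.0, start_score - median_score)
    end_above = max(0.0, end_score - median_score)
    return (end_above ** 2 - start_above ** 2) / scale

# Reproduce the worked examples (median 10000, top 12000):
for start, end in [(10000, 12000), (9000, 11000), (11000, 12000),
                   (11990, 12000), (11900, 12000)]:
    print(f"{start} -> {end}: {evolver_points(start, end, 10000, 12000)} points")
# prints 100.0, 25.0, 75.0, 0.9975, 9.75
```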

marie_s
Joined: 05/18/2008
Groups: None

There are other ways to participate in the social life of Foldit:
- sharing pictures at the end of puzzles,
- describing your strategies in the wiki,
- helping newcomers in chat.
These ways give people ideas without leading everybody down the same paths.

We don't need:
- to have less freedom: let players share what they want to share,
- to all have the same strategies,
- to all have the same way to participate,
- to be at the top in all types of puzzles to be useful; we just need to try, so that on one of them, by luck or talent, we may have the good, useful idea.

Ignacio
Joined: 05/10/2008
Groups: Go Science

Answering your criticisms:

We don't need:
- to have less freedom: let players share what they want to share,

Freedom is a typical dialectical weapon: If somebody wants to make me less free, he is naturally mistaken.

Actually, I am one of the players with the most freedom in Foldit, because most players lose freedom when they share information and use knowledge from other people's solutions, and I don't. I fully use my freedom to explore my own ideas. Where is your freedom if somebody directs, modifies or constrains your efforts by giving you information? Keep your ideas free from contamination! My suggestion is to make soloists totally independent, i.e. truly free.

Also, according to your idea that freedom = sharing, we would have more freedom if we could share all solutions with all people. Put them in a single repository. Why don't we do that? Why not a single group where we all roam free? Why can't I be in many groups, sharing all their ideas at the same time? Why don't the people who thought up Foldit free us completely? Because it makes no sense. They want us substantially independent and competing. They constrain us. And they are totally right. I just proposed alternative constraints.

- to all have the same strategies,

I said the same tools and recipes. Strategies are a different thing altogether. Different players play very differently, even with the same tools.

- to all have the same way to participate,

Right now we all have the same way to participate; I don't understand this one, sorry.

- to be at the top in all types of puzzles to be useful; we just need to try, so that on one of them, by luck or talent, we may have the good, useful idea.

Here I use statistics: if you contribute significantly to many puzzles, it is more likely that you have a useful idea. My proposal was intended to enable people to participate more substantially in the critical moments of the game: the development of new ideas and the evolution of those ideas.

Thanks for the post.

I

marie_s
Joined: 05/18/2008
Groups: None

I think people have the right to have secret strategies, recipes and so on, to be in a team or not, to share solutions in the middle of the game or not, to compete as teams... and that is freedom.
All your proposals are restrictions of freedom.
Like many limitations of freedom, they are difficult to implement. Do you plan to have a police force restricting what a player says to another player in the soloist part?

Sharing our solutions at each step of a puzzle with everybody is not efficient for the purpose of the game, and only because of this are we asked not to do it.

We do not all have the same way to participate:
- many players are not in a team, like me, but I don't see why,
- some play all puzzles, some choose which puzzles to play,
- some like design puzzles and do them first,
- some mainly evolve and don't care about their soloist score,
- some share their results, many don't,
- some speak, some don't,
- some ask for advice, some don't.

I think I am better at design and de novo puzzles than at refinement, and I have statistics to prove it, so the most efficient way for me to participate would be to do only design puzzles. I don't, because, you know, it is a game, not a job, so I play the puzzles I like, the way I like, even if the scores show that I am on a bad road.

I don't think you know how I and many other folders play this game.

tokens
Joined: 11/28/2011

There are a couple of things you are suggesting which I don't believe are true:

1: "The number of secret recipes is unkown. I can say that several of the recipes that I have, from the Go Science group-only archive, are much better than anything similar in the public repository. Several others are competitive with the best and quite different from what is public, so they provide also an advantage. And still I have had no time to check in detail all what is available, so I think I may be missing some good ones (!). I assume that other groups have some exceptional tools too, otherwise we would be winning all puzzles."

I'm in Anthropic Dreams, and I almost exclusively use public recipes. "Magic" recipes are not what gain you a high score; hard manual work and a good knowledge of the public recipes are more important.

2. "What is decreasing diversity is people massively copying other people solutions, as happens now."

I don't believe this is true. I have gained some of my highest ranks without looking at other people's work: choosing the template myself and rebuilding the way I believe is right. Clearly, if you reach rank 1 in a puzzle, it's not because you copied someone else.

That said, I have gained valuable information from studying other people's solutions. As others have also suggested, it would be great if all solutions were available for public inspection after a puzzle closes.

Ignacio
Joined: 05/10/2008
Groups: Go Science

This is a nice feedback. Thanks a lot.

There are no magic recipes, I agree. But still, recipes are not the same for everybody. You say you "almost" always use public recipes. I do too. Only, like you, not always. Sometimes some of our own recipes work better than what is in the public repository. You don't have those, and I don't have yours, and we are both working below our true abilities. That is my point.

Also, trust me when I tell you that AD had a strong lead in recipes about a year ago (I am almost tempted to distrust your humility). I was stunned by the good ideas developed by your group when I saw them for the first time. It was not the recipes, but the IDEAS that were novel when compared with what was available. And they work much better than anything public, to the point that most recipes today (what I have seen, at least) are just refinements of those ideas. It is a matter of time until some group or another gets the same kind of advantage. It may actually be happening right now, only we don't know. But there is quite a difference in group performance across different types of puzzles. E.g., for some reason Go Science is quite bad at design and symmetry puzzles and stronger in the rest. Why? I have some hypotheses.

You also agree with me on the basic idea that copying is decreasing diversity when you say that you often did best when you did not look at other people's solutions. Of course you are right. That is my point also. Make everyone work like that and you will see, on average, more and better ideas generated. However, this is not what most people do now.

So I think we fully agree on everything. Best,

I

tokens
Joined: 11/28/2011

Regarding design puzzles:

I don't think AD is good at design puzzles because we literally copy each other. However, I myself have generated some good designs for the freestyle design puzzles, which other people in my group have learned from to make similar solutions. Conversely, I have learned which approaches worked by looking at other people's solutions from my group. This is not copying; this is learning from other people's experiences, and I would love it if we could share the information created in our group with everyone in Foldit. Again, this points to the idea of letting everyone explore the top solutions after a puzzle has closed.

My main point is this: being in a group like AD gives an advantage because you learn from other people's experiences, not because you copy the best-scoring solution.

infjamc
Joined: 02/20/2009
Groups: Contenders

Re: Ignacio

Overall, I have to agree to disagree with your suggestions for the reasons that others have already stated above. But just to play devil's advocate, here are a few ideas that I see as a possible compromise:

1. Instead of banning non-public recipes altogether, why not implement a small penalty for running them (say, it would cost you 1 point per 100 lines of code executed from a private script)?

2. Instead of scrapping groups for competitive purposes, here's another idea:

  • Allow people to share solutions not only with themselves or their group, but also with everyone;
  • Revamp the scoring system to incentivize sharing solutions with the rest of the community. For example, if Player A uploads a solution to the public repository and Player B manages to evolve it, both A and B receive a bonus to their score.

3. There's another way to discourage "efficiently copy[ing] other people" or "evolv[ing] the best solution of a group": awarding points only to the top X players/teams on certain puzzles. That way, one cannot simply count on copying a teammate's solution for a score that's "high enough." Competition would also be fiercer, which might encourage people to experiment with radical explorations if they know that the only way they can get on the scoreboard is via a "Hail Mary rebuild."

infjamc
Joined: 02/20/2009
Groups: Contenders

***DISCLAIMER***

To be absolutely clear, personally I would NOT like to see #1 or #3 actually being implemented because the cure might be worse than the disease. (For example, #3 is especially bad because it could have the side effect of causing people to stop playing if they feel that they cannot be competitive enough to earn points.) Again, I'm just brainstorming out loud for the sake of discussion.

spmm
Joined: 08/05/2010
Groups: Void Crushers

These concerns come up frequently, and I confess I am unable to think of real-world competitive situations, outside of examinations and the sharp end of military research, where the workings of a 'win' are not openly available. In sports, however much secret training and special dieting goes on, the win and how you achieved it is open and public and analysed again and again on video. In science, the win requires publication and extensive peer validation of the process, and 'winners' are then often protected by patents.

In competition between firms, which compete openly, concepts like first to market, economies of scale and disruption confer advantage, and again patents protect proprietary portions of the business until disruptive forces change the landscape.

In Foldit there is no way of examining a win and what was done to make it happen: neither the winning results nor the method of working is available. Players don't get to see and interact with the top solutions or get a synopsis of the process, i.e. "this result was achieved by running x, y, z scripts for four days with one hour of handfolding", or whatever.

So it would seem that in Foldit there is a corporate level of competition, with the groups as firms competing with each other, plus some excellent solo folders who also win frequently.

Regardless of how the competition is arranged, imo it is the lack of after-game analysis that concerns many people: Foldit appears secretive, with special tools available to only a few and no way to learn from the workings of others, as would happen in the real world. This can be frustrating to many people.
To preempt the 'this is a game, not the real world' comments, note that players are assured that Foldit has meaning in the real world and is more than just a game that exists in its own universe for entertainment only.
