Move to OpenCL - a crazy idea

Case number: 699969-992230
Topic: General
Opened by: Rav3n_pl
Status: Open
Type: Suggestion
Opened on: Monday, March 26, 2012 - 18:36
Last modified: Monday, April 2, 2012 - 20:37

As far as I can tell from reading about OpenCL, the whole idea of programming in it is to use "machines". A machine describes the behavior and variables of a process, function, class, etc.
Maybe if we describe the behavior of every atom (how much space it needs, at what angles it can connect to other atoms, and the rest of the stuff I'm not aware of), we can create as many atom-machines as there are atoms in our puzzle. Then, when we use any tool that changes atom positions and forces, the machines will automagically calculate the rest of the parameters and the current energy/score in parallel. Another subroutine would then take the current state of all the machines/atoms and display it in the GUI, and so on.
As far as I understand how it works now, the whole thing calculates the score/position of every atom one by one. OpenCL allows you to use all available resources (all CPUs and all GPUs) and run as many calculations in parallel as the hardware allows. The only limitation is writing a good description of the machines. Since we don't have many types of atoms, and not that many combinations of atoms (aka amino acids), shouldn't it be possible to create atom micro-machines, segment mini-machines, and a protein machine?
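Purely to illustrate what I mean by an atom-machine (I'm guessing at everything here - the buffer layout and the simple 1/r² pair term below are made up, not the real Foldit/Rosetta energy function), a per-atom OpenCL kernel could look something like this, with one work-item per atom so every atom is scored at the same time:

```c
/* Sketch only: one work-item = one atom-machine.  The pair energy is a
 * placeholder inverse-square term, not the real score function. */
__kernel void atom_energy(__global const float4 *pos,    /* xyz + radius (assumed layout) */
                          __global float        *energy, /* per-atom energy, summed on the host */
                          const int              n_atoms)
{
    int i = (int)get_global_id(0);          /* this work-item's atom */
    if (i >= n_atoms) return;

    float e = 0.0f;
    for (int j = 0; j < n_atoms; ++j) {     /* react to every other atom */
        if (j == i) continue;
        float3 d  = pos[i].xyz - pos[j].xyz;
        float  r2 = dot(d, d) + 1e-6f;      /* avoid division by zero */
        e += 1.0f / r2;                     /* placeholder pair energy */
    }
    energy[i] = e;
}
```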
Maybe it IS built in a similar way already, but C++ doesn't allow such "tricks", and moving it to OpenCL is easier than we think? The project is not open source, so I can only guess :)
I hope all of this makes some sense to a real programmer ;]

(Mon, 03/26/2012 - 18:36  |  5 comments)


spdenne
Joined: 10/01/2011
Groups: Void Crushers

I heard a comment from Dr. David Baker towards the end of http://twit.tv/show/futures-in-biotech/92
I can't recall the exact quote, and haven't listened to it again, but he was saying something like: Rosetta@Home doesn't use the GPU because it would involve too much going back and forth between the CPU and GPU.

Joined: 05/19/2009
Groups: Contenders

The process seems to be sequential and difficult to parallelize. However, www.GPUGrid.net seems to have solved this issue, and I would strongly encourage the Foldit team to have a look.

Joined: 06/17/2010

I see parallelization like this: we touch any segment/atom and all surrounding ones need to react somehow. Something like waves on water when you throw a rock into it.

phi16
Joined: 12/23/2008

It was my understanding that there can never be true 'parallelization' on a CPU doing computations in a linear fashion. But we could mimic parallelization.

The problem is this: the computer calculates interactions between atoms one pair at a time. By the time the interactions of the second pair of atoms are calculated, the first pair has already been moved to a new location. In reality, the second pair would have been calculated at the same time as the first pair (parallelization), and since both atoms would have been acting/reacting continuously and progressively, the result may be very different from our step-by-step computational results.

What if, instead of moving an atom to its resultant new spot, we leave it at its original spot and record the resultant new spot in an array? That way, the next calculations could use the same starting information for atoms B, C, D, etc. When all calculations are complete, compare the data in the array, resolve clashes, etc., then move everything at once. This would be only a little slower than the current approach, and there might be some new processing economies to be won, since atoms aren't moving as much.
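Roughly what I have in mind, as a sketch in C (the "pull toward atom 0" rule is just a stand-in for the real force calculation, and all the names are invented):

```c
#include <stddef.h>

typedef struct { float x, y, z; } Vec3;

/* Placeholder rule: nudge atom i slightly toward atom 0.  The important part
 * is that it reads ONLY the old positions, never half-updated ones. */
static Vec3 proposed_position(const Vec3 *old_pos, size_t n, size_t i)
{
    (void)n;                      /* a real rule would loop over neighbors */
    Vec3 p = old_pos[i];
    p.x += 0.01f * (old_pos[0].x - p.x);
    p.y += 0.01f * (old_pos[0].y - p.y);
    p.z += 0.01f * (old_pos[0].z - p.z);
    return p;
}

void step_all_at_once(Vec3 *pos, Vec3 *next, size_t n)
{
    /* Phase 1: every atom's new spot is computed from the same snapshot,
     * so the loop order (or running it in parallel) cannot change the result. */
    for (size_t i = 0; i < n; ++i)
        next[i] = proposed_position(pos, n, i);

    /* Phase 2: comparing the array and resolving clashes would go here;
     * only then do all the atoms actually move at once. */
    for (size_t i = 0; i < n; ++i)
        pos[i] = next[i];
}
```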

I like your ripples (waves) idea. Since the forces might be proportional to distances, things may work this way.

Joined: 06/17/2010

It CAN be "really" parallel on a GPU or a multi-core CPU.
Atoms "just" need to be informed about changes around them and respond to the new conditions - there needs to be communication between the threads.
Then every "cycle" we have a "snapshot" of the state that we can display, score, and put into the undo graph. The cycle length should be adjustable by the user (longer when running scripts - no need to see every move; shorter during manual work, so we can see every change we want).
In my understanding, our saves are slightly upgraded PDB files. A PDB file describes the positions of atoms and the connections between them.
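Something like this rough sketch in C with OpenMP - every name is invented, I have no idea how Foldit really stores its atoms; it only shows the idea of "update all atoms in parallel for some cycles, then freeze a snapshot for the GUI/score/undo":

```c
#include <string.h>

#define N_ATOMS 1000

static float pos[N_ATOMS][3];       /* live state the parallel workers update   */
static float snapshot[N_ATOMS][3];  /* frozen copy handed to display/score/undo */

/* Placeholder "physics": a real version would read the neighboring atoms'
 * state from the previous cycle and respond to it. */
static void react_to_neighbors(int i)
{
    for (int k = 0; k < 3; ++k)
        pos[i][k] *= 0.999f;
}

/* Run a user-adjustable number of cycles, then take one snapshot. */
void run_cycles(int cycles_per_snapshot)
{
    for (int c = 0; c < cycles_per_snapshot; ++c) {
        #pragma omp parallel for     /* one atom per thread / work-item */
        for (int i = 0; i < N_ATOMS; ++i)
            react_to_neighbors(i);
    }
    /* End of the batch: a consistent state for the GUI, the score and the
     * undo graph.  Longer batches for scripts, shorter for manual work. */
    memcpy(snapshot, pos, sizeof snapshot);
}
```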
