Recipe: ST - QuickStabilize ( w/ MVANN )
Created by Seagat2011


Name: ST - QuickStabilize ( w/ MVANN )
ID: 100593
Created on: Sat, 01/17/2015 - 06:58
Updated on: Sat, 01/17/2015 - 14:58

Use after a pull or a tug (after a pose-altering operation). Saveslot two (2) is used. Press Ctrl+N to restore best. This recipe incorporates an Anthropic Dreams experimental Majority Vote Adaptive Neural Network (MVANN). The recipe is not compatible with Exploratory Puzzles. Note: if the recipe does not detect a fruitful stabilization technique, it will abort the round back to the user. (Some users may not be accustomed to this.)



Joined: 08/24/2010

Above is an idealized architecture of the MVANN.
It may not be obvious, but each characteristic function (B-F) is incorporated into a stabilizer (wa, ws, sh, etc.),
and so doesn't really need to be shown. They are shown explicitly for clarity, and to demonstrate
that they in fact determine the 'character' of a stabilizer. A stabilizer's character is what remains when its largest possible modularity, or mod N,
is found and (set-wise logically) subtracted from it. The residue that remains is called its prime; it is mod N irreducible, and thus becomes the character of the stabilizer.
Each scorepart above is mapped into a characteristic function which attempts to modularly describe it and capture this attribute into a set, or class:

The issue then becomes which characteristic functions accurately describe and map to a class. This can be accomplished without trial and error.
(See my proof of P=NP using Merlin/Arthur Ontology matching)

QueryKey = B* + C + D + E + F

PrimaryKey = F3 = A (qstabTARGET) = satlins(a3) (recurrent: feedback)

length(PrimaryKey) = length(QueryKey) = length(B* + C + D + E + F) = 2,240 bits

* = optional

F1 = purelin(a1)
F2 = hardlim(a2)
F3 = satlins(a3)
F4 = purelin(a4)
F5 = purelin(a5)
F6 = compet(a6)
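These transfer-function names follow the conventions used in [Hag95] (the MATLAB Neural Network Toolbox naming). A minimal NumPy sketch of the four functions used here, as an illustration rather than the recipe's actual code:

```python
import numpy as np

def purelin(n):
    """Linear transfer: output equals input, untouched (used by F1, F4, F5)."""
    return n

def hardlim(n):
    """Hard limit: 1 where n >= 0, else 0 (used by F2)."""
    return np.where(n >= 0, 1, 0)

def satlins(n):
    """Symmetric saturating linear: clip output to [-1, 1] (used by F3)."""
    return np.clip(n, -1, 1)

def compet(n):
    """Competitive: 1 for the maximum element, 0 elsewhere (used by F6).
    np.argmax returns the first maximum, so the first input wins ties."""
    out = np.zeros_like(n)
    out[np.argmax(n)] = 1
    return out
```

Note that `compet` built on `np.argmax` naturally matches the tie-breaking rule described later (first matching input wins).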

F1:Wt = Unit Matrix (I)
F2:Wt = PrimaryKey = F3 = satlins(a3) (recurrent: feedback)
F3:Wt = F2 = hardlim(a2)
F4:Wt = PrimaryKey = F3 = satlins(a3) (recurrent: feedforward)
F5:Wt = PrimaryKey = F3 = satlins(a3) (recurrent: feedforward)
F6:Wt = F5 = purelin(a5)

R1 = QueryKey
R2 = QueryKey
R3 = QueryKey
R4 = QueryKey
R5 = PrimaryKey = F3 = satlins(a3) (recurrent: feedforward)
R6 = F4 = purelin(a4)

Each combinatoric stabilizer is informally described using a 2240-2-1-1 neural network, shown above. (see transfer functions F1-F6)
The characteristic functions (A-F) are not shown, but they in fact feed into the inputs of F1, F2, F3, and F4 (as R1, R2, R3, and R4, respectively).

Neuron F1 performs a basic load-register operation and is essentially a standby neuron. It is not used.

F2 is a masking neuron. It takes a QueryKey as input and masks its weight F2:Wt against it, suppressing to zero (0) any non-matching bit fields.
This is done by our choice of F2's hardlim transfer function, which sends any non-matching (-1 valued) fields to 0.
The resultant weight is then fed forward into F3:Wt (as a PrimaryKey).
A unique attribute of F2 is that it receives its returning operational weight vector from F3, in the form of a returning PrimaryKey, via recurrent feedback.

F3 is most critical as a masking neuron; it masks (or suppresses to 0) any
non-matching bits between the weight and the input pattern, in essence generating a wildcard of
don't-cares between the two fields, while multiplying back in any incidental bit patterns.
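The masking idea can be illustrated with a small sketch, assuming bipolar (+1/-1) bit fields; the `mask` helper and the 4-bit keys below are hypothetical (the recipe's real keys are 2,240 bits):

```python
import numpy as np

def mask(weight, pattern):
    """Element-wise product of bipolar (+1/-1) fields:
    agreeing bits give +1, disagreeing bits give -1,
    and mismatches are then suppressed to 0 (don't-cares)."""
    prod = weight * pattern               # +1 where bits agree, -1 where they differ
    return np.where(prod >= 0, prod, 0)   # send mismatches to 0, keep matches

weight  = np.array([1, -1, 1, -1])   # hypothetical stored key fragment
pattern = np.array([1,  1, 1, -1])   # incoming query fragment
masked = mask(weight, pattern)
# positions 0, 2, 3 agree and survive; position 1 differs and becomes a don't-care
```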

Neuron F4 calculates a compact QueryKey value (or dot product) by summing its input against its known PrimaryKey, distilled in F4:Wt.

F5 behaves a little differently: it calculates an ideal compact PrimaryKey (as if it were a QueryKey), by summing each column vector with itself, generating the Unit Matrix (I).

So why do this? The idea is to gather a ratio of correctness. Dividing the actual key by the ideal key
makes for easier managing and evaluation of the output, because the result is guaranteed not to be larger than 1 (i.e. 1.00 = 100% = match).
Hence the choice of purelin transfer functions for F4 and F5 -- we want those outputs to be completely linear (untouched).
I employ this method rather than analyzing magnitudes: although more entries means potentially more matches,
and a greater number of hits, that may not be a true indicator of correctness. (See my pruning algorithm used for PrimaryKeys in the recipe.)
Dividing the actual key by the ideal key also puts the forward layer output (F6) into saturation.
All outputs then sum into an external final competitive neurode (F6) which, among all available stabilizers, chooses a winner (the stabilizer whose output is closest to 1.00).
In the event of a tie, the first matching input wins.
The winner is then fed back into the combinatoric model, into one of the mapping functions (characteristic function F:qstabHISTORY, which is fed into R1, R2, R3, and R4, remember).
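The scoring-and-competition stage described above can be sketched as follows; the stabilizer names, keys, and helper functions are illustrative assumptions, not the recipe's actual data:

```python
import numpy as np

def match_ratio(query_key, primary_key):
    """Ratio of correctness: actual match count divided by the ideal
    (perfect-match) count, so the result never exceeds 1.00."""
    actual = np.sum(query_key == primary_key)   # dot-product-style match count
    ideal = len(primary_key)                    # score of a perfect match
    return actual / ideal

def pick_winner(query_key, stabilizers):
    """Competitive stage: the stabilizer whose ratio is closest to 1.00 wins;
    np.argmax returns the first maximum, so ties go to the first entry."""
    scores = [match_ratio(query_key, pk) for pk in stabilizers.values()]
    return list(stabilizers)[int(np.argmax(scores))]

stabilizers = {                                 # hypothetical stabilizer PrimaryKeys
    "wiggle_all": np.array([1, 0, 1, 1, 0, 1]),
    "shake":      np.array([1, 1, 0, 1, 0, 0]),
}
winner = pick_winner(np.array([1, 1, 0, 1, 0, 1]), stabilizers)
```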
So what is the combined purpose of the F1, F2, F3 neurons?
F1, F2, F3 are training neurons used to generate (and train the stabilizer to recognize) PrimaryKeys. Their recurrent layers are re-activated many times during the process.
Neurons F4, F5, F6 are evaluation neurons
which score a global QueryKey against a particular stabilizer's PrimaryKey. During initial training, F3 is initialized as an empty register (its Wt is set to all 1's),
and it quickly proceeds to perform masking upon receiving varying non-matching inputs during its later training and evaluation stages.

This is the essence and operation of MVANN.

The next-generation MVANN will be modeled internally as the following,

.. a cascade of 2,240 one-bit neural networks, which I've termed the ST12-DARPA-68 model,
named after the 1969 Bell Labs networks, which used inverting amplifiers to implement switchable neural networks. [Hag95]
Training and evaluation will be performed within a completely parallel architecture, for complete modularity and scalability.
Inputs will be fed forward at all stages, for complete quantum reversibility.
The governing equation (F) for each neuron will be a lookup table or truth table, as opposed to a standard transfer function -- which,
when not continuously differentiable, cannot easily be derived by a machine.
Only Z and ~Z need be assigned within the truth table for it to map correctly, but all elements of the truth table will be machine-reassignable.
Any neuron weights can be pre-loaded by observing a Z-parameter.
The dot product, when needed, is calculated outside of the network, allowing for further off-loading and optimization.
If a weight bias is needed, one need only modify the neuron's truth table.
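A minimal sketch of the truth-table neuron idea, assuming a one-bit lookup keyed on (input, weight) pairs; the class and method names are hypothetical, not part of the recipe:

```python
class TruthTableNeuron:
    """One-bit neuron whose governing function F is a reassignable truth
    table over (input, weight) pairs, instead of a fixed transfer function."""

    def __init__(self):
        # Default table: output the weight only where the input bit is set (AND).
        self.table = {(0, 0): 0, (0, 1): 0, (1, 0): 0, (1, 1): 1}
        self.weight = 1          # pre-loadable weight bit (the observed Z-parameter)

    def fire(self, x):
        """Evaluate F by lookup rather than by a transfer function."""
        return self.table[(x, self.weight)]

    def reassign(self, row, value):
        """Machine-reassignable entry: e.g. emulate a weight bias by
        flipping one row of the truth table instead of adding a bias term."""
        self.table[row] = value

n = TruthTableNeuron()
n.fire(1)                # 1: input AND weight
n.reassign((0, 1), 1)    # modify the table in place of a weight bias
n.fire(0)                # now 1
```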

Here are my current areas of research:

What is directed entropy?
How is directed entropy affected by black holes?
What is the worst-case lower bound on the amount of time required to evaluate any addition operation?
What is the worst-case lower bound on the amount of time required to evaluate any DNF boolean operation?
Are these lower bounds correlatable?
How are proteins comparable to neural networks?
How are black holes comparable to neural networks? (Are event horizons == NeuralNet summing nodes?)
How are neural networks comparable to ALUs? (Are Wts == registers?)
How are ALUs like function generators?
Are there parallel approximations available for all linear combinations of neural networks?



[Hag95] Martin T. Hagan, Howard B. Demuth, Mark Beale, "Neural Network Design", pp. 18-2 to 18-3, 1995.

Many of the breakthroughs in the area of neural networks came about after the 1960s as a result of Dr. Hopfield, the Hopfield model, and his networks.
John Hopfield was a Caltech physicist who addressed the implementation issues involved
with VLSI chip design for content-addressable memories, as well as optical implementations.

Joined: 09/24/2012
Groups: Go Science
are you sure it works?

I tried it and it finished very soon without results. Are you sure the script has no bug?
It seems powerful and I would like to be sure the script is ok.

Joined: 08/24/2010
Recipe is..

The recipe is intended as a tool for hand-folders, to be used repeatedly as you fold. With this recipe you cannot just run it and walk away.


Developed by: UW Center for Game Science, UW Institute for Protein Design, Northeastern University, Vanderbilt University Meiler Lab, UC Davis
Supported by: DARPA, NSF, NIH, HHMI, Amazon, Microsoft, Adobe, Boehringer Ingelheim, RosettaCommons