Comments
Thu, 01/22/2015  16:58
#2
Are you sure it works?
I tried it and it finished very quickly without results. Are you sure the script has no bugs?
Sun, 02/01/2015  19:08
#3
Recipe is..
Recipe is a tool, hopefully for handfolders, to be used repeatedly as you fold. With this recipe you cannot just run it and walk away.
Sun, 02/08/2015  19:11
#4
Hi
*handfolding 

Above is an idealized architecture of the MVANN.
It may not appear obvious, but each characteristic function (B-F) is incorporated into a stabilizer (wa, ws, sh, etc.),
and so doesn't really need to be shown. But they are shown explicitly for clarity, and to demonstrate
that they in fact determine the 'character' of a stabilizer. A stabilizer's character is what remains when its largest possible modularity, or mod N,
is found and (setwise, logically) subtracted from it. The residue that remains is called its prime; it is mod-N irreducible, and thus becomes the character of the stabilizer.
Each scorepart above is mapped into a characteristic function which attempts to modularly describe it, and capture this attribute into a set, or class.
The issue then becomes which characteristic functions accurately describe and map to a class. This can be accomplished without trial-and-error.
(See my proof of P=NP using Merlin/Arthur Ontology matching)
QueryKey = B* + C + D + E + F
PrimaryKey = F3 = A (qstabTARGET) = satlins(a3) (recurrent: feedback)
length(PrimaryKey) = length(QueryKey) = length(B* + C + D + E + F) = 2,240 bits
* = optional
NEURON OUTPUTS
F1 = purelin(a1)
F2 = hardlim(a2)
F3 = satlins(a3)
F4 = purelin(a4)
F5 = purelin(a5)
F6 = compet(a6)
NEURON WEIGHTS
F1:Wt = Unit Matrix (I)
F2:Wt = PrimaryKey = F3 = satlins(a3) (recurrent: feedback)
F3:Wt = F2 = hardlim(a2)
F4:Wt = PrimaryKey = F3 = satlins(a3) (recurrent: feedforward)
F5:Wt = PrimaryKey = F3 = satlins(a3) (recurrent: feedforward)
F6:Wt = F5 = purelin(a5)
NEURON INPUTS
R1 = QueryKey
R2 = QueryKey
R3 = QueryKey
R4 = QueryKey
R5 = PrimaryKey = F3 = satlins(a3) (recurrent: feedforward)
R6 = F4 = purelin(a4)
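The transfer functions named in the lists above (purelin, hardlim, satlins, compet) follow the conventions of [Hag95]. As a rough sketch (in Python, since the recipe's host language isn't shown here), they can be written as:

```python
# Sketch of the four transfer functions used by F1-F6, following the
# conventions of [Hag95]. Each maps a list of net inputs to outputs.

def purelin(a):
    # Linear: output equals input, untouched.
    return list(a)

def hardlim(a):
    # Hard limit: 1 if the net input is >= 0, else 0.
    return [1 if x >= 0 else 0 for x in a]

def satlins(a):
    # Symmetric saturating linear: clip each net input to [-1, 1].
    return [max(-1.0, min(1.0, x)) for x in a]

def compet(a):
    # Competitive: the largest net input wins (output 1), all others 0.
    # Ties go to the first (lowest-index) input.
    winner = a.index(max(a))
    return [1 if i == winner else 0 for i in range(len(a))]
```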
Each combinatoric stabilizer is informally described using a 2240-2-1-1 neural network, shown above (see transfer functions F1-F6).
The characteristic functions (A-F) are not shown, but they in fact feed into the inputs of F1, F2, F3, and F4 (as R1, R2, R3, and R4, respectively).
Neuron F1 performs a basic load-register operation and is essentially a standby neuron. It is not used.
F2 is a masking neuron. It takes as input a QueryKey and then masks its weight F2:Wt against it, suppressing to zero (0) any non-matching bit fields.
This is done by our choice of F2's hardlim transfer function, which sends any non-matching (-1-valued) fields to 0.
The resultant weight is then fed forward into F3:Wt (as a PrimaryKey).
A unique attribute of F2 is that it receives its returning operational weight vector from F3, in the form of a returning PrimaryKey, via recurrent feedback.
F3 is most critical as a masking neuron; it masks (or suppresses to 0) any
non-matching bits between the weight and the input pattern, in essence generating a wildcard of
don't-cares between the two fields, while multiplying back in any coinciding bit patterns.
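The masking step described for F2 and F3 can be sketched over plain bit vectors. The helper below is hypothetical (the recipe's actual key layout isn't shown); it keeps a bit only where weight and input agree high, turning disagreements into 0-valued don't-cares:

```python
def mask_key(weight_bits, input_bits):
    """Suppress to 0 any bit field where the weight and the input
    pattern disagree, keeping only the coinciding bit patterns.
    Hypothetical sketch of the F2/F3 masking step."""
    return [w & x for w, x in zip(weight_bits, input_bits)]
```

A mismatched bit becomes a don't-care (0); only bits present in both fields survive into the resulting PrimaryKey.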
Neuron F4 calculates a compact QueryKey value (or dot product) by summing itself against its known PrimaryKey, distilled in F4:Wt.
F5 behaves a little differently: it calculates an ideal compact PrimaryKey (as if it were a QueryKey), by summing each column vector with itself, generating the Unit Matrix (I).
So why do this? The idea is to gather a ratio of correctness. By later dividing the actual key by the ideal key,
the output becomes easier to manage and evaluate, because it is guaranteed not to be larger than 1 (i.e., 1.00 = 100% = match).
Hence the choice of purelin transfer functions for F4 and F5: we want the outputs to be completely linear (untouched).
I employ this method rather than analysis of magnitudes, because although more entries means potentially more matches,
and a greater number of hits, that may not be a true indicator of correctness. (See my pruning algorithm used for PrimaryKeys in the recipe.)
Dividing the actual key by the ideal key also puts the forward-layer output (F6) into saturation, which again bounds the result:
it is guaranteed not to be larger than 1 (1.00 = 100%).
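The ratio-of-correctness step can be sketched as follows. The name score_key is hypothetical, and the F4/F5 summations here stand in for the in-network dot products; the ideal denominator is the all-ones key dotted with itself, so the score can never exceed 1.00:

```python
def score_key(query_bits, primary_bits):
    """Score a QueryKey against a stabilizer's PrimaryKey as a ratio
    of correctness in [0, 1]. Hypothetical sketch: the actual F4/F5
    summations run inside the network."""
    # F4: dot product of the query against the known PrimaryKey.
    actual = sum(q & p for q, p in zip(query_bits, primary_bits))
    # F5: ideal key (all ones) dotted with itself.
    ideal = len(primary_bits)
    return actual / ideal
```

A score of 1.00 means a 100% match; anything less is a partial match, already normalized for easy comparison across stabilizers.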
All outputs then sum to an external final competitive neurode (F6) which, among all available stabilizers, chooses a winner (the stabilizer whose output is closest to 1.00).
In the event of a tie, the first matching input wins.
The winner is then fed back into the combinatoric model, into one of the mapping functions (characteristic function F:qstabHISTORY, which is fed into R1, R2, R3, and R4, remember).
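Winner selection across stabilizers can be sketched the same way; pick_winner is a hypothetical helper that chooses the stabilizer whose score lies closest to 1.00, with ties going to the first match:

```python
def pick_winner(scores):
    """Among all stabilizer scores, return the index of the one
    closest to 1.00. min() returns the first minimum, so on a tie
    the first matching stabilizer wins. Hypothetical sketch of the
    F6 competitive step."""
    return min(range(len(scores)), key=lambda i: abs(1.0 - scores[i]))
```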
So what is the combined purpose of the F1, F2, F3 neurons?
F1, F2, and F3 are training neurons, used to generate (and train the stabilizer to recognize) PrimaryKeys. Their recurrent layers are reactivated many times during the process.
The F4, F5, F6 neurons, by contrast, are evaluation neurons,
which score a global QueryKey against a particular stabilizer's PrimaryKey. During initial training, F3 is initialized as an empty register (its Wt is set to all 1's),
and it quickly proceeds to perform masking upon receiving varying non-matching inputs during its later training and evaluation stages.
This is the essence and operation of MVANN.
The next-generation MVANN will be modeled internally as the following:
.. as a cascade of 2,240 1-bit neural networks, which I've termed the ST12DARPA68 model,
named after the 1969 Bell Labs networks, which used inverting amplifiers to implement switchable neural networks. [Hag95]
Training and evaluation will be performed within a completely parallel architecture, for complete modularity and scalability.
Inputs will be fed forward at all stages, for complete quantum reversibility.
The governing equation (F) for each neuron will be a lookup table, or truth table, as opposed to a standard transfer function, which,
when not continuously differentiable, cannot easily be derived by a machine.
Only Z and ~Z need be assigned within the truth table for it to map correctly, but all elements of the truth table will be machine-reassignable.
Any neuron weights can be preloaded by observing a Z-parameter.
The dot product, when needed, is calculated outside of the network, allowing for further offloading and optimization.
If a weight bias is needed, one may only need to modify the neuron's truth table.
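A 1-bit truth-table neuron of this kind might look like the sketch below. The table layout and the Z / ~Z assignments here are illustrative assumptions, not the actual ST12DARPA68 encoding:

```python
def make_truth_table_neuron(z_output, not_z_output):
    """Build a 1-bit neuron whose governing equation is a truth
    table rather than a transfer function. Only Z and ~Z are
    assigned up front; every entry remains machine-reassignable
    afterward. Illustrative sketch only."""
    table = {
        (1, 1): z_output,      # weight and input agree high -> Z
        (0, 0): z_output,      # weight and input agree low  -> Z
        (1, 0): not_z_output,  # disagreement -> ~Z
        (0, 1): not_z_output,
    }
    def neuron(weight_bit, input_bit):
        # Fire by lookup instead of by evaluating a function.
        return table[(weight_bit, input_bit)]
    return neuron, table  # the table stays exposed for reassignment
```

Because the neuron fires by lookup, no differentiation is needed to train it: a machine can reassign any table entry directly, which is the point of the design.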
Here are my current areas of research:
What is directed entropy?
How is directed entropy affected by black holes?
What is the worst-case lower bound on the amount of time required to evaluate any addition operation?
What is the worst-case lower bound on the amount of time required to evaluate any DNF boolean operation?
Are these lower bounds correlatable?
How are proteins comparable to neural networks?
How are black holes comparable to neural networks? (Are event horizons == neural-net summing nodes?)
How are neural networks comparable to ALUs? (Are Wts == registers?)
How are ALUs like function generators?
Are there parallel approximations available for all linear combinations of neural networks?
Seagat2011
REFERENCES
[Hag95] Martin T. Hagan, Howard B. Demuth, Mark Beale, "Neural Network Design", pp. 18.2-18.3, 1995.
Many of the breakthroughs in the area of neural networks came about after the 1960s as a result of Dr. Hopfield, the Hopfield Model, and his networks.
John Hopfield was a Caltech physicist who addressed the implementation issues involved
with VLSI chip design for content-addressable memories, as well as optical implementations.