It is proven that neural networks with rational weights have the computational power of a Universal Turing Machine
Question
It is proven that neural networks with rational weights have the computational power of a Universal Turing Machine (see "Turing computability with Neural Nets"). From what I gather, it seems that using real-valued weights yields even more computational power, though I'm not certain of this.
However, is there any correlation between the computational power of a neural net and its activation function? For example, if the activation function compares its input against the limit of a Specker sequence (something you can't do with a regular Turing machine, right?), does this make the neural net computationally "stronger"? Could someone point me to a reference in this direction?
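(Background, for readers unfamiliar with the term: a Specker sequence is a computable, strictly increasing, bounded sequence of rationals whose limit is not a computable real. The classic construction, due to Specker (1949), fixes a computable injective enumeration $a_0, a_1, a_2, \dots$ of a computably enumerable but non-computable set $A \subseteq \mathbb{N}$ and sets

$$q_n = \sum_{k=0}^{n} 2^{-a_k}.$$

Each $q_n$ is rational and computable from $n$, and the sequence is increasing and bounded by $2$; but if its limit were computable, one could decide membership in $A$ by approximating the limit closely enough to see whether $2^{-k}$ still fits in the remaining tail, so the limit is not computable.)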
Explanation / Answer
Just a note:
- rational-weighted recurrent NNs with boolean activation functions (simple thresholds) are equivalent to finite state automata (Minsky, "Computation: Finite and Infinite Machines", 1967); a small code sketch of this case follows the list;
- rational-weighted recurrent NNs with linear sigmoid activation functions are equivalent to Turing Machines (Siegelmann and Sontag, "On the computational power of neural nets", 1995);
- real-weighted recurrent NNs with linear sigmoid activation functions are more powerful than Turing Machines (Siegelmann and Sontag, "Analog computation via neural networks", 1993);
but ...
- real-weighted recurrent NNs with Gaussian noise on the outputs cannot recognize arbitrary regular languages (Maass and Sontag, "Analog Neural Nets with Gaussian or Other Common Noise Distributions Cannot Recognize Arbitrary Regular Languages", 1999).
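To make the first item concrete, here is a minimal sketch of a recurrent net of boolean threshold units simulating a two-state finite automaton, the one recognizing the regular language "even number of 1s". The weights and thresholds are illustrative choices of mine, not taken from any of the papers above:

```python
def step(x):
    """Boolean threshold unit (Heaviside): fires iff its net input is >= 0."""
    return 1.0 if x >= 0 else 0.0

def saturated_linear(x):
    """The saturated-linear sigmoid sigma(x) = min(1, max(0, x)) from the
    Siegelmann-Sontag constructions (shown for comparison; unused below)."""
    return min(1.0, max(0.0, x))

def parity_net(bits):
    """Recognize 'even number of 1s' with a tiny recurrent net of threshold
    units, i.e. a finite automaton in Minsky's sense. The automaton state
    (even, odd) is kept one-hot in the two recurrent units e and o."""
    e, o = 1.0, 0.0                    # start state: even
    for b in bits:
        # Hidden threshold units, one per automaton transition:
        stay_even = step(e - b - 0.5)  # in 'even', read 0
        to_even   = step(o + b - 1.5)  # in 'odd',  read 1
        stay_odd  = step(o - b - 0.5)  # in 'odd',  read 0
        to_odd    = step(e + b - 1.5)  # in 'even', read 1
        # State units: OR of the transitions entering each state.
        e = step(stay_even + to_even - 0.5)
        o = step(stay_odd + to_odd - 0.5)
    return e == 1.0                    # accept iff we end in 'even'

assert parity_net([1, 0, 1])           # two 1s   -> accept
assert not parity_net([1, 1, 1])       # three 1s -> reject
assert parity_net([])                  # zero 1s  -> accept
```

Note that every weight and threshold here is rational, which is all the finite-automaton case needs; roughly speaking, replacing step with saturated_linear and encoding stack contents in the activation values is what lifts such constructions to Turing power in the Siegelmann-Sontag papers.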