It is proven that neural networks with rational weights have the computational power of the Universal Turing Machine
Question
It is proven that neural networks with rational weights have the computational power of the Universal Turing Machine (see "Turing computability with Neural Nets"). From what I gather, it seems that using real-valued weights yields even more computational power, though I'm not certain of this.
However, is there any correlation between the computational power of a neural net and its activation function? For example, if the activation function compares the input against the limit of a Specker sequence (something a regular Turing machine can't do, right?), does this make the neural net computationally "stronger"? Could someone point me to a reference in this direction?
Explanation / Answer
I'm going to take the easy route and say "yes". Consider an activation function that accepts any input and simply returns a constant value (that is, it ignores its input). A network built from such units always produces a constant output, so its computational power (by virtually any definition) is zero: it cannot compute anything.
This is enough to show a dependence of the network's power on the activation function. It does not, of course, show or disprove that a network could have more power than a universal Turing machine.
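A minimal sketch of the constant-activation argument above, assuming a toy one-hidden-layer network with made-up weight shapes (nothing here is from the original post): whatever the input, the output is identical, so the network can only compute constant functions.

```python
import numpy as np

def constant_activation(x, c=1.0):
    # Ignores its input entirely; every "neuron" outputs the same constant c.
    return np.full_like(x, c)

def forward(x, W1, W2):
    h = constant_activation(W1 @ x)  # hidden layer: constant regardless of x
    return W2 @ h                    # output is therefore independent of x

rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 3))  # hypothetical weight matrices
W2 = rng.normal(size=(2, 4))

# Two very different inputs produce the same output: the network
# realizes only constant functions, no matter what the weights are.
print(forward(np.array([0.0, 0.0, 0.0]), W1, W2))
print(forward(np.array([5.0, -2.0, 7.5]), W1, W2))
```

The point of the sketch is only that the choice of activation can collapse the function class a network can represent, not that it settles the super-Turing question.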