QT:{{“
In the mathematical theory of artificial neural networks, universal approximation theorems are theorems[1][2] of the following form: Given a family of neural networks, for each function f from a certain function space, there exists a sequence of neural networks from the family that converges to f under some criterion. That is, the family of neural networks is dense in the function space.
The most popular version states that feedforward networks with non-polynomial activation functions are dense in the space of continuous functions between two Euclidean spaces, with respect to the compact convergence topology.
”}}
https://en.wikipedia.org/wiki/Universal_approximation_theorem
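Written out, the density statement in the quote takes the following shape. This is a sketch in my own notation (𝒩 for the network family, K a compact subset of R^n, σ the activation); none of these symbols come from the excerpt itself.

```latex
% "\mathcal{N} is dense in C(K, \mathbb{R}^m)" with the uniform norm means:
\[
  \forall f \in C(K,\mathbb{R}^{m})\;\;
  \forall \varepsilon > 0\;\;
  \exists \phi \in \mathcal{N}:\quad
  \sup_{x \in K}\bigl\lVert \phi(x) - f(x) \bigr\rVert < \varepsilon,
\]
% equivalently, some sequence \phi_1, \phi_2, \ldots \in \mathcal{N}
% converges to f uniformly on K. In the "most popular version" quoted
% above, \mathcal{N} is the one-hidden-layer feedforward family
\[
  \phi(x) = W_{2}\,\sigma(W_{1}x + b_{1}) + b_{2},
\]
% with a continuous, non-polynomial activation \sigma and unrestricted
% hidden width.
```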
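As a quick numerical illustration of the "sequence of networks converging to f" picture (my own sketch, not from the article): fit one-hidden-layer tanh networks of growing width to f(x) = sin(x) on the compact set K = [-π, π], and the sup-norm error on a grid should shrink as width grows. The random-feature construction, widths, and scales are arbitrary illustrative choices, not part of the theorem.

```python
# Numerical sketch (mine, not the article's): one-hidden-layer tanh
# networks of growing width fit f(x) = sin(x) on K = [-pi, pi].
import numpy as np

rng = np.random.default_rng(0)

# Dense grid on K and target values.
x = np.linspace(-np.pi, np.pi, 400).reshape(-1, 1)
y = np.sin(x).ravel()

for width in (2, 5, 20, 100):
    # Hidden layer: random weights/biases, tanh activation.
    W1 = rng.normal(0.0, 2.0, (1, width))
    b1 = rng.normal(0.0, 2.0, width)
    H = np.tanh(x @ W1 + b1)                  # (400, width) hidden features

    # Linear readout fitted by least squares -> one network phi in the family.
    w2, *_ = np.linalg.lstsq(H, y, rcond=None)
    max_err = np.abs(H @ w2 - y).max()        # sup-norm error on the grid
    print(f"width={width:4d}  max |phi - sin| on grid = {max_err:.5f}")
```

Fitting only the readout by least squares sidesteps training altogether, which matches the theorem's character: it asserts that approximating networks exist, not that any particular algorithm finds them.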