Harmonic Analysis and Signal Processing Seminar

Random Vector Functional Link Neural Networks as Universal Approximators

Palina Salanevich
UCLA


Monday, September 23, 2019, 1:30pm, WWH 1314


Abstract


Single layer feedforward neural networks (SLFNs) have been widely applied to problems such as classification and regression because of their universal approximation capability. At the same time, the iterative methods usually used to train SLFNs suffer from slow convergence, can get trapped in local minima, and are sensitive to the choice of parameters. The Random Vector Functional Link network (RVFL) is a randomized version of the SLFN: the weights from the input layer to the hidden layer are drawn at random from a suitable domain and kept fixed during the learning stage, so that only the output layer is optimized. This makes training much easier and computationally cheaper. Igelnik and Pao proved that RVFL networks are universal approximators for continuous functions on bounded finite-dimensional sets. In this talk, we provide a non-asymptotic bound on the approximation error in terms of the number of nodes in the hidden layer, and discuss an extension of the Igelnik and Pao result to the case where the data are assumed to lie on a lower-dimensional manifold.
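
To make the training scheme concrete, the following is a minimal Python/NumPy sketch of an RVFL-style fit. The activation, weight distribution, and least-squares output fit are illustrative assumptions, not the specific construction analyzed in the talk, and the sketch omits the direct input-to-output links some RVFL variants include.

    import numpy as np

    rng = np.random.default_rng(0)

    def train_rvfl(X, y, n_hidden=200, scale=1.0):
        """RVFL-style training: random fixed hidden weights,
        least-squares fit of the output layer only."""
        d = X.shape[1]
        # Input-to-hidden weights and biases are drawn at random
        # (Gaussian/uniform here as an assumed choice) and kept fixed.
        W = rng.normal(scale=scale, size=(d, n_hidden))
        b = rng.uniform(-1.0, 1.0, size=n_hidden)
        H = np.tanh(X @ W + b)  # hidden-layer features: rho(<w_k, x> + b_k)
        # Only the output weights a_k are learned, via ordinary least squares.
        a, *_ = np.linalg.lstsq(H, y, rcond=None)
        return W, b, a

    def predict_rvfl(X, W, b, a):
        # Network output: sum_k a_k * rho(<w_k, x> + b_k)
        return np.tanh(X @ W + b) @ a

    # Example: approximate a continuous function on a bounded set.
    X = rng.uniform(-1, 1, size=(500, 2))
    y = np.sin(np.pi * X[:, 0]) * np.cos(np.pi * X[:, 1])
    W, b, a = train_rvfl(X, y)
    X_test = rng.uniform(-1, 1, size=(100, 2))
    y_test = np.sin(np.pi * X_test[:, 0]) * np.cos(np.pi * X_test[:, 1])
    max_err = np.max(np.abs(predict_rvfl(X_test, W, b, a) - y_test))

Because the hidden layer is random and fixed, the optimization reduces to a linear least-squares problem in the output weights, which is the source of the computational savings described above.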