*Description:* | In the area of radial basis function networks (RBF networks), various bump functions, such as Gaussians or inverse multiquadrics, are used to represent the network's computational units. The universal approximation property (UAP) for RBF networks guarantees the existence of a network (a setting of its parameters) that approximates arbitrarily well any function from a given class, e.g., the class of continuous functions on a compact domain. This well-known result (Park and Sandberg, Neural Computation, 1991) relies on the assumption that the widths of the computational units can be shrunk arbitrarily. Under that assumption, almost any reasonable bump function suffices for an RBF network to exhibit the UAP. However, if the unit widths are fixed, as is the case for convolution kernel neural networks, the situation becomes more complex. It turns out that to preserve the UAP, one must examine the behavior of the multi-dimensional Fourier transform of the computational units; and there are bump functions that are no longer suitable for building such a network. In the lecture, we present in more detail the theorem that brings the Fourier transform into play, together with concrete computations of the transform for commonly used bump functions. |
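The fixed-width phenomenon described above can be illustrated numerically. The sketch below is our own illustration, not material from the lecture; the choice of test kernels, grid, and tolerances are assumptions. It contrasts a Gaussian, whose Fourier transform is strictly positive everywhere, with a rectangular bump, whose transform 2·sin(ω)/ω has zeros at ω = π, 2π, … — the kind of kernel that fails the fixed-width criterion.

```python
import numpy as np

# Illustrative sketch (kernels and grid are our assumptions): with fixed unit
# widths, whether translates of a kernel g can approximate arbitrary functions
# hinges on whether its Fourier transform
#     g_hat(w) = ∫ g(x) e^{-i w x} dx
# vanishes anywhere.

xs, dx = np.linspace(-20.0, 20.0, 200_001, retstep=True)

def fourier_transform(g, w):
    """Numerical Fourier transform of g at frequency w (rectangle rule)."""
    return np.sum(g(xs) * np.exp(-1j * w * xs)) * dx

gauss = lambda x: np.exp(-x**2 / 2)                 # Gaussian bump
box = lambda x: (np.abs(x) <= 1).astype(float)      # rectangular bump

# Gaussian: g_hat(w) = sqrt(2*pi) * exp(-w**2/2) > 0 for every w,
# so fixed-width Gaussian units remain suitable.
w = 2.0
assert abs(fourier_transform(gauss, w).real
           - np.sqrt(2 * np.pi) * np.exp(-w**2 / 2)) < 1e-6

# Rectangular bump: g_hat(w) = 2*sin(w)/w vanishes at w = pi, 2*pi, ...
# A zero of the transform rules the kernel out in the fixed-width setting.
assert abs(fourier_transform(box, np.pi)) < 1e-3
```

The assertions check the numerical transform against the closed-form values: the Gaussian's transform stays bounded away from zero, while the rectangular bump's transform hits an exact zero at ω = π.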