ARTIFICIAL NEURAL NETWORKS

Two special sessions on Artificial Neural Networks will be held at the Conference. The sessions, consisting solely of invited talks, have been organized by Eddy Mayoraz (IDIAP, Switzerland) and are devoted to combinatorial and complexity issues arising in the field of neural networks.

There is an abundant literature studying the ability of neural networks to approximate arbitrary real-valued functions. A consequence of these powerful results states, for example, that a feedforward neural network with a single hidden layer of sigmoidal units can approximate any continuous real-valued function defined on a compact subset of Euclidean space.
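
Concretely, such a network computes a finite sum of shifted and scaled sigmoids. The following sketch (a toy Python/NumPy illustration with arbitrary, untrained parameters; none of it is drawn from the session papers) evaluates an approximant of this form:

import numpy as np

def sigmoid(z):
    # Standard logistic sigmoid, the prototypical "sigmoidal" activation.
    return 1.0 / (1.0 + np.exp(-z))

def one_hidden_layer(x, W, b, alpha):
    # A single-hidden-layer feedforward network with a linear output:
    # f(x) = sum_i alpha_i * sigmoid(w_i . x + b_i)
    return sigmoid(x @ W.T + b) @ alpha

# Toy illustration: 3 hidden units evaluated on points of [0, 1].
# All parameter values below are hypothetical, chosen only for the demo.
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 1))    # hidden-layer weights
b = rng.normal(size=3)         # hidden-layer biases
alpha = rng.normal(size=3)     # output weights
x = np.linspace(0.0, 1.0, 5).reshape(-1, 1)
print(one_hidden_layer(x, W, b, alpha))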

More recently, several papers have also addressed the relationship between the number of hidden units and the quality of the approximation.
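
One frequently cited result in this direction, stated here only as background and not as a summary of the session's talks, is Barron's bound (Barron, 1993) on the L2 approximation error achievable with n sigmoidal hidden units:

    \| f - f_n \|_{L^2} = O\left( \frac{C_f}{\sqrt{n}} \right),

where C_f is the first absolute moment of the Fourier transform of f; notably, the rate does not deteriorate with the input dimension.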

A similar topic consists of studying the ability of feedforward neural networks with threshold units (Heaviside functions instead of sigmoidal functions) to recognize a subset of Euclidean space, or equivalently, to compute its characteristic function exactly. While depth-3 networks can recognize any region of Euclidean space, the set of regions recognizable by a depth-2 network is not well understood, and its characterization has generated several interesting papers over the last five years.
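
To make the depth-2 case concrete: any convex polyhedron, being an intersection of halfspaces, is recognizable at depth 2, with one threshold gate per halfspace feeding an AND gate, itself a threshold gate. A minimal sketch along these lines, with illustrative weights of our own choosing (not taken from the papers):

import numpy as np

def threshold(z):
    # Heaviside threshold gate: outputs 1 iff its input is non-negative.
    return (z >= 0).astype(int)

def depth2_polyhedron(x, A, c):
    # Layer 1: one threshold gate per halfspace a_i . x <= c_i,
    # i.e. the gate fires iff c_i - a_i . x >= 0.
    halfspaces = threshold(c - x @ A.T)
    # Layer 2: an AND of the k gates, itself a threshold gate
    # (fires iff all k gates fire).
    k = A.shape[0]
    return threshold(halfspaces.sum(axis=-1) - k)

# The unit square in the plane as an intersection of four halfspaces.
A = np.array([[1, 0], [-1, 0], [0, 1], [0, -1]], dtype=float)
c = np.array([1, 0, 1, 0], dtype=float)  # x <= 1, -x <= 0, y <= 1, -y <= 0
points = np.array([[0.5, 0.5], [1.5, 0.5]])
print(depth2_polyhedron(points, A, c))  # [1 0]: inside, outside

Convex polyhedra are thus easy; the open characterization question concerns which other regions this architecture can capture.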

The aim of the first session is to present the state of the art in this field with a geometrical and combinatorial flavor. The first two papers are devoted to the characterization of the regions that can be recognized by a depth-2 feedforward neural network with threshold gates. The last paper addresses the computational complexity of recognizing such regions.

Artificial neural networks, and in particular feedforward networks, are now used effectively to solve real-life problems. However, the slowness of the training process remains one of the predominant criticisms. To cope with this drawback, numerous improvements of the standard training algorithms have been proposed and alternative architectures have been developed. To gain a deep understanding of the potential and limits of a new computational model, a study of the computational complexity of parametrizing it to solve a particular problem is essential.
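
For orientation, the standard training algorithm in question is gradient descent on a squared-error cost, i.e. backpropagation. The following toy NumPy sketch, with hyperparameters chosen purely for illustration and unrelated to the talks, trains a small depth-2 sigmoidal network on XOR; even this four-point problem takes thousands of gradient steps, hinting at the slowness criticized above:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# XOR training set.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(1)
W1 = rng.normal(size=(2, 4))   # 4 hidden sigmoidal units (illustrative)
b1 = np.zeros(4)
W2 = rng.normal(size=(4, 1))
b2 = np.zeros(1)
lr = 0.5                       # learning rate, chosen for the demo

for step in range(20000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass for the cost 0.5 * ||out - y||^2.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Gradient-descent updates.
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

# Typically approaches [0, 1, 1, 0]; convergence may require a
# different initialization, since the cost surface is non-convex.
print(np.round(out.ravel(), 2))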

This second session starts with two talks on computational complexity issues arising in two different, particular architectures of neural networks. It ends with the presentation of a third type of neural network, known as a mixture of experts or gated neural network, and a description of its application to an industrial problem.
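
For readers unfamiliar with the architecture: in a mixture of experts, a gating network assigns input-dependent convex weights to several expert networks, and the model's output is the weighted combination of the experts' outputs. A minimal sketch of the forward computation, with hypothetical linear experts and a linear gate (names and shapes are our own, not from the talk):

import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def mixture_of_experts(x, expert_weights, gate_weights):
    # Each expert is a linear map; the gate softmax-weights their outputs,
    # so the mixture output is a convex combination of expert outputs.
    expert_outputs = np.stack([x @ W for W in expert_weights], axis=-1)  # (n, d_out, k)
    gates = softmax(x @ gate_weights)                                    # (n, k)
    return (expert_outputs * gates[:, None, :]).sum(axis=-1)             # (n, d_out)

rng = np.random.default_rng(2)
experts = [rng.normal(size=(3, 2)) for _ in range(4)]  # 4 experts, 3 -> 2
gate = rng.normal(size=(3, 4))                         # one gate score per expert
x = rng.normal(size=(5, 3))
print(mixture_of_experts(x, experts, gate).shape)      # (5, 2)

In practice the experts and the gate are trained jointly, so that the gate learns to route each region of the input space to the expert that handles it best.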