Graphon Signal Processing, Luana Ruiz

Graphons are infinite-dimensional objects that represent the limits of convergent sequences of discrete graphs. This talk discusses a theory of Graphon Signal Processing centered on the notions of the graphon Fourier transform and linear shift-invariant graphon filters, the graphon counterparts of graph Fourier transforms and graph filters. It is shown that, in convergent sequences of graphs and associated graph signals: (i) the graph Fourier transform converges to the graphon Fourier transform for graphon bandlimited signals; and (ii) the spectral and vertex responses of graph filters converge to the spectral and vertex responses of graphon filters with the same coefficients. These theorems imply that for graphs belonging to certain families (in the sense that they are part of sequences converging to a given graphon), graph Fourier analysis and graph filter design have well-defined limits. In turn, these facts extend the applicability of graph signal processing to graphs with large numbers of nodes (because designs can be transferred from the limit graphon to finite graphs) and to dynamic graphs (because designs can be transferred across different graphs drawn from the same graphon).
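
The transferability claim can be illustrated numerically. The sketch below (not from the talk; the graphon W, the signal X(u) = cos(pi*u), and the filter coefficients h are illustrative assumptions) samples graphs of increasing size from a fixed graphon, applies a polynomial graph filter with the same coefficients to each, and computes the graph Fourier transform via the eigendecomposition of the shift operator. As n grows, the filter outputs and the leading GFT coefficients stabilize, consistent with convergence to the graphon filter and graphon Fourier transform.

    import numpy as np

    def sample_graph_from_graphon(W, n, rng):
        # Sample n latent points uniformly on [0, 1] and connect nodes i, j
        # independently with probability W(u_i, u_j).
        u = np.sort(rng.uniform(size=n))
        P = W(u[:, None], u[None, :])
        A = np.triu(rng.uniform(size=(n, n)) < P, 1).astype(float)
        return A + A.T, u

    def graph_filter(A, x, h):
        # Polynomial graph filter y = sum_k h[k] S^k x, with shift S = A / n
        # (the normalization that matches the graphon limit).
        S = A / A.shape[0]
        y, Skx = np.zeros_like(x), x.copy()
        for hk in h:
            y += hk * Skx
            Skx = S @ Skx
        return y

    def gft(A, x):
        # Graph Fourier transform: coefficients of x in the eigenbasis of S.
        lam, V = np.linalg.eigh(A / A.shape[0])
        return lam, V.T @ x

    W = lambda x, y: np.exp(-np.abs(x - y))   # illustrative graphon
    h = [1.0, 0.5, 0.25]                      # fixed filter coefficients
    rng = np.random.default_rng(0)
    for n in (200, 800, 3200):
        A, u = sample_graph_from_graphon(W, n, rng)
        x = np.cos(np.pi * u)                 # graphon signal sampled at u
        lam, xhat = gft(A, x)
        y = graph_filter(A, x, h)
        # GFT coefficients at the largest eigenvalues (rescaled to the L2([0,1])
        # normalization; abs() because eigenvectors are sign-ambiguous) and the
        # filter outputs near u = 0 and u = 1 stabilize as n grows.
        print(n, np.abs(xhat[-3:]) / np.sqrt(n), y[0], y[-1])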


Stability of Graph Neural Networks to Relative Perturbations, Fernando Gama

Graph neural networks (GNNs), consisting of a cascade of layers each applying a graph convolution followed by a pointwise nonlinearity, have become a powerful architecture for processing signals supported on graphs. Graph convolutions (and thus GNNs) rely heavily on knowledge of the graph for operation. In many practical cases, however, the graph shift operator (GSO) is not known and needs to be estimated, or may change between training time and testing time. In this paper, we study the effect that a change in the underlying graph topology supporting the signal has on the output of a GNN. We prove that graph convolutions with integral Lipschitz filters lead to GNNs whose output change is bounded by the size of the relative change in the topology. We then leverage this result to show that the main reason for the success of GNNs is that they are stable architectures capable of discriminating features at large eigenvalues, a feat that linear graph filters cannot achieve: a linear graph filter can be stable or discriminative, but not both. Finally, we comment on the use of this result to train GNNs with increased stability and run experiments on movie recommendation systems.
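
A minimal sketch of the architecture described above: each layer computes a graph convolution, a polynomial in the GSO applied to the input features, followed by a pointwise nonlinearity. All sizes, initializations, and the choice of tanh are illustrative assumptions; note that the integral Lipschitz condition of the stability result, |lambda h'(lambda)| <= C on each filter's frequency response h(lambda), is a constraint imposed on the filter coefficients at design or training time rather than something visible in this forward pass.

    import numpy as np

    def gnn_layer(S, X, H, sigma=np.tanh):
        # One GNN layer: graph convolution sum_k S^k X H[k], then a pointwise
        # nonlinearity sigma. S is the (n, n) GSO, X the (n, f_in) features,
        # H a list of K filter taps, each of shape (f_in, f_out).
        SkX, Y = X, np.zeros((X.shape[0], H[0].shape[1]))
        for Hk in H:
            Y += SkX @ Hk
            SkX = S @ SkX
        return sigma(Y)

    def gnn(S, X, layers):
        # Cascade of graph-convolutional layers.
        for H in layers:
            X = gnn_layer(S, X, H)
        return X

    # Illustrative example: a two-layer GNN with K = 3 taps per filter,
    # run on a random graph with the adjacency matrix as GSO.
    rng = np.random.default_rng(0)
    n, K = 50, 3
    A = np.triu(rng.uniform(size=(n, n)) < 0.1, 1).astype(float)
    S = A + A.T
    layers = [[0.1 * rng.normal(size=(3, 8)) for _ in range(K)],
              [0.1 * rng.normal(size=(8, 4)) for _ in range(K)]]
    X = rng.normal(size=(n, 3))
    print(gnn(S, X, layers).shape)            # -> (50, 4)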