Counterfactuals uncover the modular structure of deep generative models

Published in ICLR, 2020

Recommended citation: @article{besserve2018counterfactuals, title={Counterfactuals uncover the modular structure of deep generative models}, author={Besserve, Michel and Sun, R{\'e}my and Sch{\"o}lkopf, Bernhard}, journal={arXiv preprint arXiv:1812.03253}, year={2018}} (https://arxiv.org/pdf/1812.03253)

Deep generative models such as Generative Adversarial Networks (GANs) and Variational Auto-Encoders (VAEs) are important tools for capturing and investigating the properties of complex empirical data. However, the complexity of their internal components makes their functioning difficult to assess and modify; in this respect, these architectures behave as black-box models. To better understand how such networks operate, we analyze their modularity through counterfactual manipulation of their internal variables. Experiments on face images suggest that a degree of modularity between groups of channels emerges within the convolutional layers of vanilla VAE and GAN generators. This helps explain the functional organization of these systems and allows the design of meaningful transformations of the generated images without further training.
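The core idea can be illustrated with a small sketch (not the authors' code): generate two images, then counterfactually overwrite a subset of channels at an intermediate layer of the generator with the activations computed from the other latent code, producing a "hybrid" output without any retraining. The tiny generator, the intervention layer, and the channel subset below are illustrative assumptions only.

```python
# Minimal sketch of counterfactual channel intervention in a generator (assumed architecture).
import torch
import torch.nn as nn

class TinyGenerator(nn.Module):
    """A small DCGAN-style generator used only for illustration."""
    def __init__(self, z_dim=64, ch=128):
        super().__init__()
        self.block1 = nn.Sequential(
            nn.ConvTranspose2d(z_dim, ch, 4, 1, 0), nn.BatchNorm2d(ch), nn.ReLU(True))
        self.block2 = nn.Sequential(
            nn.ConvTranspose2d(ch, ch // 2, 4, 2, 1), nn.BatchNorm2d(ch // 2), nn.ReLU(True))
        self.block3 = nn.Sequential(
            nn.ConvTranspose2d(ch // 2, 3, 4, 2, 1), nn.Tanh())

    def intermediate(self, z):
        # Activation at the chosen intervention layer.
        return self.block1(z.view(z.size(0), -1, 1, 1))

    def forward(self, z, donor=None, channels=None):
        h = self.intermediate(z)
        if donor is not None:
            # Counterfactual intervention: replace the selected channels of the
            # intermediate activation with those computed from another latent code.
            h = h.clone()
            h[:, channels] = donor[:, channels]
        return self.block3(self.block2(h))

gen = TinyGenerator().eval()
z_a, z_b = torch.randn(1, 64), torch.randn(1, 64)
with torch.no_grad():
    h_b = gen.intermediate(z_b)            # donor activation from the second latent code
    channels = torch.arange(0, 32)         # assumed channel subset (candidate "module")
    x_hybrid = gen(z_a, donor=h_b, channels=channels)
print(x_hybrid.shape)  # torch.Size([1, 3, 16, 16])
```

In the paper's setting, the interest lies in finding channel subsets whose intervention changes a localized, interpretable aspect of the output (e.g. part of a face) while leaving the rest intact, which is the sense in which the generator is modular.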

Download paper here: https://arxiv.org/pdf/1812.03253
