A deep generative model such as a GAN learns to model a rich set of semantic and physical rules about the target distribution, but until now it has been unclear how such rules are encoded in the network, or how a rule could be changed. In this paper, we introduce a new problem setting: manipulation of specific rules encoded by a deep generative model. To address the problem, we propose a formulation in which the desired rule is changed by manipulating a layer of a deep network as a linear associative memory. We derive an algorithm for modifying one entry of the associative memory, and we demonstrate that several interesting structural rules can be located and modified within the layers of state-of-the-art generative models.
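One plausible reading of "modifying one entry of the associative memory" is a constrained rank-one weight update: treat the layer as a linear map v = W k and change W as little as possible (in a norm weighted by the statistics of typical keys) while forcing it to map a chosen key to a chosen value. The NumPy sketch below illustrates that idea under stated assumptions; the function name `rewrite_one_entry`, the key second-moment matrix `C`, and the pair `(k_star, v_star)` are hypothetical, and the abstract does not spell out the exact update rule.

```python
import numpy as np

def rewrite_one_entry(W0, k_star, v_star, C):
    """Sketch of a rank-one associative-memory edit (assumed form).

    Treat the layer as a linear map v = W k. Given a new key/value pair
    (k_star, v_star) and the second-moment matrix C = E[k k^T] of keys
    the layer normally sees, apply the smallest C-weighted change to W0
    such that the edited layer maps k_star exactly to v_star.
    """
    d = np.linalg.solve(C, k_star)               # update direction C^{-1} k*
    lam = (v_star - W0 @ k_star) / (k_star @ d)  # scale so that W1 @ k* = v*
    return W0 + np.outer(lam, d)                 # rank-one update of the layer

# Toy usage: edit a random 8x16 "layer" to store one new association.
rng = np.random.default_rng(0)
W0 = rng.normal(size=(8, 16))
K = rng.normal(size=(16, 1000))                  # stand-in for typical keys
C = K @ K.T / K.shape[1]
k_star, v_star = rng.normal(size=16), rng.normal(size=8)

W1 = rewrite_one_entry(W0, k_star, v_star, C)
assert np.allclose(W1 @ k_star, v_star)          # the new rule is stored exactly
```

Because the change is rank-one and aligned with C^{-1} k*, it perturbs the layer's response to other, statistically typical keys as little as possible, which matches the abstract's goal of editing one rule without disturbing the rest of the model.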
We propose a novel procedure which adds "content-addressability" to any given unconditional implicit...
The past years have seen great progress in deep generative models, including Generative Adversaria...
The lottery ticket hypothesis suggests that sparse sub-networks of a given neural network, if initi...
This paper presents the network bending framework, a new approach for manipulating and interacting w...
We introduce a new framework for manipulating and interacting with deep generative models that we ca...
Modern AI algorithms are rapidly becoming ubiquitous in everyday life and have even been touted as t...
We investigate the role of neurons within the internal computations of deep neural networks for comp...
Generative models, especially ones that are parametrized by deep neural networks, are powerful unsup...
Observations of various deep neural network architectures indicate that deep networks may be spontan...
We consider the problem of learning deep generative models from data. We formulate a method that ge...
Deep generative models such as Generative Adversarial Networks (GANs) and Variational Auto-Encode...