Some questions about INN #25

Open
xuedue opened this issue Aug 18, 2021 · 1 comment

xuedue commented Aug 18, 2021

Thank you very much for sharing the code. I have a few questions I'd like to ask you:

  1. Glow is structurally reversible, i.e. it is an invertible network even without training, whereas your work is not structurally invertible and requires a certain amount of training to become a reversible network. I don't know if my understanding is correct.

  2. Can your work make an MLP network invertible? If so, could you tell me which part needs to be modified?

Looking forward to your reply!

@Laityned

Dear xuedue,

The i-ResNet paper ("Invertible Residual Networks", Behrmann et al., 2019) describes this architecture.

In the Glow paper it is explained that Glow uses an architecture such that an inverse can be found for each mapping in feature space. Half of the feature space is left unmodified and used as input to a neural network, which outputs non-linear, input-dependent parameters (a scale and a shift) that are used to transform the second half of the feature space. Because of this construction, the mapping from x to z is bijective: an analytic inverse exists, so the mapping z -> x is possible.
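
As a minimal sketch of this idea (my own illustration in PyTorch, not code from this repo; the class name `AffineCoupling` and the layer sizes are assumptions), a coupling layer and its closed-form inverse look roughly like this:

```python
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    """Minimal Glow/RealNVP-style affine coupling layer.

    The first half of the features passes through unchanged and
    parameterises an affine transform of the second half, so an
    exact analytic inverse is available.
    """

    def __init__(self, dim, hidden=64):  # dim must be even
        super().__init__()
        # The inner network can be arbitrary and non-invertible;
        # invertibility of the coupling layer does not depend on it.
        self.net = nn.Sequential(
            nn.Linear(dim // 2, hidden), nn.ReLU(),
            nn.Linear(hidden, dim),  # outputs log-scale and shift
        )

    def forward(self, x):
        x1, x2 = x.chunk(2, dim=-1)
        log_s, t = self.net(x1).chunk(2, dim=-1)
        z2 = x2 * torch.exp(log_s) + t       # transform second half
        return torch.cat([x1, z2], dim=-1)

    def inverse(self, z):
        z1, z2 = z.chunk(2, dim=-1)
        log_s, t = self.net(z1).chunk(2, dim=-1)
        x2 = (z2 - t) * torch.exp(-log_s)    # exact analytic inverse
        return torch.cat([z1, x2], dim=-1)

layer = AffineCoupling(8)
x = torch.randn(4, 8)
print(torch.allclose(layer.inverse(layer(x)), x, atol=1e-5))  # True
```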

However, i-ResNet uses a different approach to achieve reversibility of the network. Using the Banach fixed-point theorem, the authors prove that a unique fixed point (in other words, a point on the line y = x) exists. For the fixed point to be found by an iterative algorithm, the Lipschitz constant of the residual branch must be lower than 1. This is achieved by applying spectral normalization to the convolutional layers within a single ResNet block.
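
A rough sketch of that constraint, using PyTorch's built-in `spectral_norm` (an assumption on my part: the actual i-ResNet code uses its own power-iteration scheme for convolutions and additionally rescales by a coefficient c < 1, whereas the built-in only normalizes the reshaped weight matrix to spectral norm ≈ 1):

```python
import torch.nn as nn
from torch.nn.utils import spectral_norm

# Residual branch g of a single i-ResNet-style block: spectral
# normalization bounds the spectral norm of each weight, and ELU is
# 1-Lipschitz, so the Lipschitz constant of g stays around 1 (the
# real implementation rescales further to keep it strictly below 1).
g = nn.Sequential(
    spectral_norm(nn.Conv2d(16, 16, kernel_size=3, padding=1)),
    nn.ELU(),
    spectral_norm(nn.Conv2d(16, 16, kernel_size=3, padding=1)),
)
```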

Using the fact that the Lipschitz constant is < 1 for the convolutional branch of a single block, together with the residual (skip) connection, one can show that the mapping of a single residual block is monotone (always increasing). From this property it can be concluded that there is a one-to-one mapping from the input to the output of a single residual block.

Using the above fact, it can be shown that the whole network is invertible by construction. However, since the Banach fixed-point theorem and the corresponding iterative algorithm are used to converge to the inverse of a block, you only obtain a numeric inverse for a residual block.
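
A minimal sketch of that iterative inverse (the function name `invert_residual_block` and the toy contraction are my own illustration, not the repo's implementation):

```python
import torch

def invert_residual_block(y, g, n_iters=100):
    """Numerically invert y = x + g(x) by Banach fixed-point iteration.

    If Lip(g) < 1, the map x -> y - g(x) is a contraction, so the
    iteration converges to the unique preimage x of y.
    """
    x = y.clone()
    for _ in range(n_iters):
        x = y - g(x)
    return x

# Toy contraction with Lipschitz constant 0.5, standing in for the
# spectrally normalized residual branch of a real block.
g = lambda x: 0.5 * torch.tanh(x)
x = torch.randn(4, 16)
y = x + g(x)                          # forward map of one residual block
x_rec = invert_residual_block(y, g)   # numeric inverse
print(torch.allclose(x, x_rec, atol=1e-6))  # True
```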

Finding an analytic inverse for a residual block has so far proven too hard. However, you can use this repo as-is to create a network that is reversible. As for your MLP question, the same construction carries over to fully connected layers; see the sketch below.
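
A hedged sketch of an invertible residual MLP block (my own code, not from this repo; the class name, sizes, and coefficient c are assumptions). Essentially, the spectrally normalized convolutions are replaced by spectrally normalized linear layers, and the inverse uses the same fixed-point iteration as above:

```python
import torch
import torch.nn as nn
from torch.nn.utils import spectral_norm

class InvertibleResidualMLPBlock(nn.Module):
    """x -> x + c * g(x) with a spectrally normalized MLP branch.

    With 1-Lipschitz activations and per-layer spectral norm ~1,
    scaling by c < 1 makes g a contraction, so the block can be
    inverted numerically by Banach fixed-point iteration.
    """

    def __init__(self, dim, hidden=64, c=0.9):
        super().__init__()
        self.c = c
        self.g = nn.Sequential(
            spectral_norm(nn.Linear(dim, hidden)),
            nn.ELU(),
            spectral_norm(nn.Linear(hidden, dim)),
        )

    def forward(self, x):
        return x + self.c * self.g(x)

    def inverse(self, y, n_iters=100):
        x = y.clone()
        for _ in range(n_iters):
            x = y - self.c * self.g(x)
        return x
```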
