Using this implementation for Global editing #3

Open
Nerdyvedi opened this issue Jul 21, 2020 · 9 comments

@Nerdyvedi

Is it possible to use this for Region editing and Global editing as mentioned in the paper?

The figure below is from the paper.

[image: figure from the paper]

Thanks a lot.

@rosinality
Owner

I haven't tried it, but I think you can implement it.

@Nerdyvedi
Author

@rosinality Would be grateful if you could guide me. I am trying to implement the Global editing feature.
From what I understand, we need to edit the texture code to change global attributes like age, lighting, and background. They edited the texture code using an interactive UI that performs vector arithmetic with the PCA components, but they never mentioned what vector arithmetic needs to be performed.

@rosinality
Owner

Have you performed PCA on the texture codes? Then you can move the texture vectors along the principal components. For example, you can choose the principal components that explain the largest variances and add them to the texture vectors with some scalar (texture vector + scalar * principal component). Please refer to this paper: https://arxiv.org/abs/2004.02546

Or you can try approaches like https://arxiv.org/abs/2007.06600, which extracts eigenvectors directly from the weight matrices. I tried it on StyleGAN2, so maybe you can refer to it: https://github.com/rosinality/stylegan2-pytorch
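The vector arithmetic described above can be sketched in isolation. This is a toy example with random stand-in codes, not the repository's actual pipeline; in practice the codes would come from the swapping autoencoder's encoder.

```python
import torch

# Toy stand-in for texture codes: 1000 codes of dimension 8
# (in the real setting these come from the encoder, not randn).
torch.manual_seed(0)
codes = torch.randn(1000, 8)

# PCA via SVD: center the codes first; the columns of V are the
# principal components, ordered by explained variance (singular values S).
centered = codes - codes.mean(0, keepdim=True)
U, S, V = torch.svd(centered)

# Edit one code by moving it along the top principal component:
# edited = texture vector + scalar * principal component
scale = 3.0
edited = codes[0] + scale * V[:, 0]

print(edited.shape)  # torch.Size([8])
```

The scalar controls how far the code moves along the chosen direction; sweeping it over a range (e.g. -5 to 5) is how interactive sliders for such edits are typically built.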

@juzuo

juzuo commented Dec 24, 2020

Hi @rosinality
Could you please share the code so we can try it quickly, or give some instructions on how to use it with the swapping autoencoder? Thanks!

@rosinality
Owner

@juzuo You can do it like this:

import torch
from torch.utils import data
from torchvision import transforms
from matplotlib import pyplot as plt
from tqdm import tqdm

from model import Encoder, Generator
from stylegan2.dataset import MultiResolutionDataset

ckpt_path = 'checkpoint/ffhq-070000.pt'
dset_path = 'ffhq.lmdb'

ckpt = torch.load(ckpt_path, map_location=lambda storage, loc: storage)
transform = transforms.Compose(
    [
        transforms.ToTensor(),
        transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5), inplace=True),
    ]
)
dset = MultiResolutionDataset(dset_path, transform, 256)

device = 'cuda'
encoder = Encoder(32).to(device)
generator = Generator(32).to(device)
encoder.load_state_dict(ckpt['e_ema'])
generator.load_state_dict(ckpt['g_ema'])

# Encode the whole dataset and collect the texture codes
textures = []
loader = data.DataLoader(dset, batch_size=256)

with torch.no_grad():
    for batch in tqdm(loader):
        _, texture = encoder(batch.to(device))
        textures.append(texture.to('cpu'))

texture_t = torch.cat(textures, 0)
# Center the codes before PCA
texture_c = texture_t - texture_t.mean(0, keepdim=True)

# PCA via SVD of the centered codes; columns of V are the principal components
U, S, V = torch.svd(texture_c)

dataset_i = 45000   # which sample to edit
scale = 100         # step size along the component
direction_i = 1     # which principal component to use

with torch.no_grad():
    structure, texture = encoder(dset[dataset_i].unsqueeze(0).to(device))
    img = generator(structure, texture)
    # Move the texture code along the chosen principal component
    img_e = generator(structure, texture + scale * V[:, direction_i].unsqueeze(0).to(device))

# Show original and edited images side by side
plt.imshow(torch.cat((img, img_e), 3).to('cpu').squeeze(0).add_(1).div_(2).clamp_(0, 1).permute(1, 2, 0).numpy())

@juzuo

juzuo commented Dec 25, 2020

Thanks, my buddy! @rosinality I will try it ASAP.

@juzuo

juzuo commented Dec 25, 2020

By the way, I have a question: is it possible to train the model on a single RTX 3090 GPU?

@rosinality
Owner

@juzuo Batch size will be the problem. I think you can train if you use gradient accumulation.

@juzuo

juzuo commented Dec 25, 2020

I see. Thanks buddy!
