Compare commits

...

16 Commits

Author SHA1 Message Date
bishe
77468f16f9 cptrans reproduction complete 2025-03-22 15:27:19 +08:00
bishe
c173e29ea6 final_version 2025-03-18 21:12:32 +08:00
bishe
537cb050a5 save a version 2025-03-18 20:14:59 +08:00
bishe
f98c285950 last CPTrans-reproduction version in which problems were found 2025-03-15 15:00:00 +08:00
bishe
e0dc08030c cptrans reproduction 2025-03-09 21:41:52 +08:00
bishe
997fdd3770 changed D to return scores, features 2025-03-07 19:25:25 +08:00
bishe
14ba81514f modifications 2025-03-07 19:20:37 +08:00
bishe
c6cb68e700 tried feeding the discriminator at every step, but it is far too slow 2025-03-07 18:43:06 +08:00
bishe
76fcec26e8 exp8 version 2025-03-07 10:13:25 +08:00
bishe
2a0a56ac26 latest after modifications 2025-02-27 18:00:41 +08:00
bishe
7a6e856b4b running UNIV 2025-02-26 22:24:17 +08:00
bishe
e8e483fbf8 EDIT_DOWN 2025-02-26 22:07:11 +08:00
bishe
3c4d53377c EDIT_DOWN 2025-02-26 22:07:06 +08:00
bishe
6a2761be99 without cnt running 002 2025-02-24 23:35:03 +08:00
bishe
c2e6cfe0b1 running without cnt named 001 2025-02-24 23:10:23 +08:00
bishe
4af0d7463d withoutCNT 2025-02-24 23:00:25 +08:00
10010 changed files with 649 additions and 553 deletions

.gitignore vendored Normal file

@@ -0,0 +1,5 @@
checkpoints/
*.log
*.pth
*.ckpt
__pycache__/


@@ -1,70 +0,0 @@
================ Training Loss (Sun Feb 23 15:46:44 2025) ================
================ Training Loss (Sun Feb 23 15:52:29 2025) ================
================ Training Loss (Sun Feb 23 16:00:07 2025) ================
================ Training Loss (Sun Feb 23 16:02:40 2025) ================
================ Training Loss (Sun Feb 23 16:05:19 2025) ================
================ Training Loss (Sun Feb 23 16:06:44 2025) ================
================ Training Loss (Sun Feb 23 16:09:38 2025) ================
================ Training Loss (Sun Feb 23 16:44:56 2025) ================
================ Training Loss (Sun Feb 23 16:49:46 2025) ================
================ Training Loss (Sun Feb 23 16:51:03 2025) ================
================ Training Loss (Sun Feb 23 16:51:23 2025) ================
================ Training Loss (Sun Feb 23 18:04:02 2025) ================
================ Training Loss (Sun Feb 23 18:04:39 2025) ================
================ Training Loss (Sun Feb 23 18:05:17 2025) ================
================ Training Loss (Sun Feb 23 18:06:40 2025) ================
================ Training Loss (Sun Feb 23 18:11:48 2025) ================
================ Training Loss (Sun Feb 23 18:13:31 2025) ================
================ Training Loss (Sun Feb 23 18:14:11 2025) ================
================ Training Loss (Sun Feb 23 18:14:29 2025) ================
================ Training Loss (Sun Feb 23 18:16:27 2025) ================
================ Training Loss (Sun Feb 23 18:16:44 2025) ================
================ Training Loss (Sun Feb 23 18:20:39 2025) ================
================ Training Loss (Sun Feb 23 18:21:44 2025) ================
================ Training Loss (Sun Feb 23 18:35:27 2025) ================
================ Training Loss (Sun Feb 23 18:39:21 2025) ================
================ Training Loss (Sun Feb 23 18:40:15 2025) ================
================ Training Loss (Sun Feb 23 18:41:15 2025) ================
================ Training Loss (Sun Feb 23 18:47:46 2025) ================
================ Training Loss (Sun Feb 23 18:48:36 2025) ================
================ Training Loss (Sun Feb 23 18:50:20 2025) ================
================ Training Loss (Sun Feb 23 18:51:50 2025) ================
================ Training Loss (Sun Feb 23 18:58:45 2025) ================
================ Training Loss (Sun Feb 23 18:59:52 2025) ================
================ Training Loss (Sun Feb 23 19:03:05 2025) ================
================ Training Loss (Sun Feb 23 19:03:57 2025) ================
================ Training Loss (Sun Feb 23 21:11:47 2025) ================
================ Training Loss (Sun Feb 23 21:17:10 2025) ================
================ Training Loss (Sun Feb 23 21:20:14 2025) ================
================ Training Loss (Sun Feb 23 21:29:03 2025) ================
================ Training Loss (Sun Feb 23 21:34:57 2025) ================
================ Training Loss (Sun Feb 23 21:35:26 2025) ================
================ Training Loss (Sun Feb 23 22:28:43 2025) ================
================ Training Loss (Sun Feb 23 22:29:04 2025) ================
================ Training Loss (Sun Feb 23 22:29:52 2025) ================
================ Training Loss (Sun Feb 23 22:30:40 2025) ================
================ Training Loss (Sun Feb 23 22:33:48 2025) ================
================ Training Loss (Sun Feb 23 22:39:16 2025) ================
================ Training Loss (Sun Feb 23 22:39:48 2025) ================
================ Training Loss (Sun Feb 23 22:41:34 2025) ================
================ Training Loss (Sun Feb 23 22:42:01 2025) ================
================ Training Loss (Sun Feb 23 22:44:17 2025) ================
================ Training Loss (Sun Feb 23 22:45:53 2025) ================
================ Training Loss (Sun Feb 23 22:46:48 2025) ================
================ Training Loss (Sun Feb 23 22:47:42 2025) ================
================ Training Loss (Sun Feb 23 22:49:44 2025) ================
================ Training Loss (Sun Feb 23 22:50:29 2025) ================
================ Training Loss (Sun Feb 23 22:51:47 2025) ================
================ Training Loss (Sun Feb 23 22:55:56 2025) ================
================ Training Loss (Sun Feb 23 22:56:19 2025) ================
================ Training Loss (Sun Feb 23 22:57:58 2025) ================
================ Training Loss (Sun Feb 23 22:59:09 2025) ================
================ Training Loss (Sun Feb 23 23:02:36 2025) ================
================ Training Loss (Sun Feb 23 23:03:56 2025) ================
================ Training Loss (Sun Feb 23 23:09:21 2025) ================
================ Training Loss (Sun Feb 23 23:10:05 2025) ================
================ Training Loss (Sun Feb 23 23:11:43 2025) ================
================ Training Loss (Sun Feb 23 23:12:41 2025) ================
================ Training Loss (Sun Feb 23 23:13:05 2025) ================
================ Training Loss (Sun Feb 23 23:13:59 2025) ================
================ Training Loss (Sun Feb 23 23:14:59 2025) ================


@@ -1,87 +0,0 @@
----------------- Options ---------------
atten_layers: 5
batch_size: 1
beta1: 0.5
beta2: 0.999
checkpoints_dir: ./checkpoints
continue_train: False
crop_size: 256
dataroot: /home/openxs/kunyu/datasets/InfraredCity-Lite/Double/Moitor [default: placeholder]
dataset_mode: unaligned_double [default: unaligned]
direction: AtoB
display_env: ROMA [default: main]
display_freq: 50
display_id: None
display_ncols: 4
display_port: 8097
display_server: http://localhost
display_winsize: 256
easy_label: experiment_name
epoch: latest
epoch_count: 1
eta_ratio: 0.1
evaluation_freq: 5000
flip_equivariance: False
gan_mode: lsgan
gpu_ids: 0
init_gain: 0.02
init_type: xavier
input_nc: 3
isTrain: True [default: None]
lambda_D_ViT: 1.0
lambda_GAN: 8.0 [default: 1.0]
lambda_NCE: 8.0 [default: 1.0]
lambda_SB: 0.1
lambda_ctn: 1.0
lambda_global: 1.0
lambda_inc: 1.0
lmda_1: 0.1
load_size: 286
lr: 1e-05 [default: 0.0002]
lr_decay_iters: 50
lr_policy: linear
max_dataset_size: inf
model: roma_unsb [default: cut]
n_epochs: 100
n_epochs_decay: 100
n_layers_D: 3
n_mlp: 3
name: ROMA_UNSB_001 [default: experiment_name]
nce_T: 0.07
nce_idt: False [default: True]
nce_includes_all_negatives_from_minibatch: False
nce_layers: 0,4,8,12,16
ndf: 64
netD: basic_cond
netF: mlp_sample
netF_nc: 256
netG: resnet_9blocks_cond
ngf: 64
no_antialias: False
no_antialias_up: False
no_dropout: True
no_flip: True [default: False]
no_html: False
normD: instance
normG: instance
num_patches: 256
num_threads: 4
num_timesteps: 10 [default: 5]
output_nc: 3
phase: train
pool_size: 0
preprocess: resize_and_crop
pretrained_name: None
print_freq: 100
random_scale_max: 3.0
save_by_iter: False
save_epoch_freq: 5
save_latest_freq: 5000
serial_batches: False
stylegan2_G_num_downsampling: 1
suffix:
tau: 0.01
update_html_freq: 1000
use_idt: False
verbose: False
----------------- End -------------------


@@ -1411,7 +1411,6 @@ class MLPDiscriminator(nn.Module):
         self.activation = nn.GELU()
         self.linear2 = nn.Linear(hid_feat, out_feat)
         self.dropout = nn.Dropout(dropout)
-
     def forward(self, x):
         x = self.linear1(x)
         x = self.activation(x)
@@ -1419,7 +1418,6 @@ class MLPDiscriminator(nn.Module):
         x = self.linear2(x)
         return self.dropout(x)
-
 class NLayerDiscriminator(nn.Module):
     """Defines a PatchGAN discriminator"""

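The two hunks above only drop blank lines, but they show the shape of the token-level discriminator this compare centers on (see the commit "changed D to return scores, features"). Below is a minimal runnable sketch of such a per-token MLP head; the sizes (in_feat=768 for ViT-Base tokens, hid_feat=512, out_feat=1) and the class name TokenMLPDiscriminator are assumptions for illustration, not the repo's defaults.

import torch
import torch.nn as nn

class TokenMLPDiscriminator(nn.Module):
    """Scores every ViT token independently: [B, N, in_feat] -> [B, N, out_feat]."""
    def __init__(self, in_feat=768, hid_feat=512, out_feat=1, dropout=0.1):
        super().__init__()
        self.linear1 = nn.Linear(in_feat, hid_feat)
        self.activation = nn.GELU()
        self.linear2 = nn.Linear(hid_feat, out_feat)
        self.dropout = nn.Dropout(dropout)

    def forward(self, x):
        x = self.linear1(x)        # [B, N, hid_feat]
        x = self.activation(x)
        x = self.linear2(x)        # [B, N, out_feat]
        return self.dropout(x)     # one score vector per patch token

tokens = torch.randn(2, 576, 768)  # vit_base_patch16_384 yields 576 patch tokens
scores = TokenMLPDiscriminator()(tokens)
print(scores.shape)                # torch.Size([2, 576, 1])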
View File

@@ -2,6 +2,7 @@ import numpy as np
 import math
 import timm
 import torch
+import torchvision.models as models
 import torch.nn as nn
 import torch.nn.functional as F
 from torchvision.transforms import GaussianBlur
@@ -12,6 +13,7 @@ import util.util as util
 from torchvision.transforms import transforms as tfs
+
 def warp(image, flow):  # warp operation
     """
     Image-warping function based on optical flow
@@ -36,121 +38,74 @@ def warp(image, flow):  # warp operation
     # bilinear interpolation
     return F.grid_sample(image, new_grid, align_corners=True)
 
-# temporal-normalization loss
-def compute_ctn_loss(G, x, F_content):  # Eq. 10
-    """
-    Compute the content-aware temporal normalization loss
-    Args:
-        G: the generator
-        x: input infrared image [B,C,H,W]
-        F_content: generated optical-flow field [B,2,H,W]
-    """
-    # generate the visible-light image
-    y_fake = G(x)  # [B,3,H,W]
-    # warp the generated result with the flow
-    warped_fake = warp(y_fake, F_content)  # [B,3,H,W]
-    # apply the same flow to the input, then generate
-    warped_x = warp(x, F_content)  # [B,C,H,W]
-    y_fake_warped = G(warped_x)  # [B,3,H,W]
-    # L2 loss
-    loss = F.mse_loss(warped_fake, y_fake_warped)
-    return loss
-
 class ContentAwareOptimization(nn.Module):
     def __init__(self, lambda_inc=2.0, eta_ratio=0.4):
         super().__init__()
-        self.lambda_inc = lambda_inc  # weight-boost coefficient
-        self.eta_ratio = eta_ratio  # fraction of content regions to select
+        self.lambda_inc = lambda_inc  # controls the weight increment for content-rich regions
+        self.eta_ratio = eta_ratio  # fraction of content-rich regions to select
+        self.criterionGAN = networks.GANLoss('lsgan').cuda()  # use the LSGAN loss
 
-    def compute_cosine_similarity(self, gradients):
+    def compute_cosine_similarity(self, grad_patch, grad_mean):
         """
-        Compute the cosine similarity between each patch gradient and the mean gradient
+        Compute the cosine similarity between each token's gradient and the overall mean gradient
         Args:
-            gradients: [B, N, D] per-patch gradients of the discriminator output (N=w*h)
+            grad_patch: [B, N, D], per-token gradients (from scores)
+            grad_mean: [B, D], overall mean gradient
         Returns:
-            cosine_sim: [B, N] cosine similarity of each patch
+            cosine: [B, N], cosine similarities δ_i
         """
-        mean_grad = torch.mean(gradients, dim=1, keepdim=True)  # [B, 1, D]
-        # compute the cosine similarity
-        cosine_sim = F.cosine_similarity(gradients, mean_grad, dim=2)  # [B, N]
-        return cosine_sim
+        # cosine similarity for every token
+        cosine = F.cosine_similarity(grad_patch, grad_mean.unsqueeze(1), dim=2)  # [B, N]
+        return cosine
 
-    def generate_weight_map(self, gradients_fake):
+    def generate_weight_map(self, cosine):
         """
-        Generate the content-aware weight map
+        Generate the weight map from the cosine similarities
         Args:
-            gradients_fake: [B, N, D] discriminator gradients for the generated image [2,3,256,256]
+            cosine: [B, N], cosine similarities δ_i
         Returns:
-            weight_fake: [B, N] weight map for the generated image [2,3,256]
+            weights: [B, N], weight map w_i
         """
-        # cosine similarity of the generated image's patches
-        cosine_fake = self.compute_cosine_similarity(gradients_fake)  # [B, N]
-
-        # select the content-rich regions (the eta_ratio fraction with the lowest cosine similarity)
-        k = int(self.eta_ratio * cosine_fake.shape[1])
-        # build the weight map for the generated image (same procedure)
-        _, fake_indices = torch.topk(-cosine_fake, k, dim=1)
-        weight_fake = torch.ones_like(cosine_fake)
-        for b in range(cosine_fake.shape[0]):
-            weight_fake[b, fake_indices[b]] = self.lambda_inc / (1e-6 + torch.abs(cosine_fake[b, fake_indices[b]]))
-        return weight_fake
+        B, N = cosine.shape
+        k = int(self.eta_ratio * N)  # select an eta_ratio fraction of the tokens
+        _, indices = torch.topk(-cosine, k, dim=1)  # pick the k tokens that deviate the most
+        weights = torch.ones_like(cosine)
+        for b in range(B):
+            selected_cosine = cosine[b, indices[b]]
+            weights[b, indices[b]] = self.lambda_inc / (torch.exp(torch.abs(selected_cosine)) + 1e-6)
+        return weights
 
-    def forward(self, D_real, D_fake, real_scores, fake_scores):
+    def forward(self, scores, target):
         """
-        Compute the content-aware adversarial loss
+        Forward pass: compute the weighted GAN loss
         Args:
-            D_real: discriminator feature output for real images [B, C, H, W]
-            D_fake: discriminator feature output for generated images [B, C, H, W]
-            real_scores: discriminator predictions for real images [B, N] (N=H*W)
-            fake_scores: discriminator predictions for generated images [B, N]
+            scores: [B, N, D], discriminator prediction scores
+            target: target label (True / False)
         Returns:
-            loss_co_adv: the content-aware adversarial loss
+            weighted_loss: the weighted GAN loss
+            weight: weight map [B, N]
         """
-        B, C, H, W = D_real.shape
-        N = H * W
-
-        # register hooks to capture the gradients
-        gradients_real = []
-        gradients_fake = []
-
-        def hook_real(grad):
-            gradients_real.append(grad.detach().view(B, N, -1))
-
-        def hook_fake(grad):
-            gradients_fake.append(grad.detach().view(B, N, -1))
-
-        D_real.register_hook(hook_real)
-        D_fake.register_hook(hook_fake)
-
-        # compute the raw adversarial loss to trigger the gradient computation
-        loss_real = torch.mean(torch.log(real_scores + 1e-8))
-        loss_fake = torch.mean(torch.log(1 - fake_scores + 1e-8))
-        # add dummy terms tied to D_real / D_fake to guarantee gradient flow
-        loss_dummy = 1e-8 * (D_real.sum() + D_fake.sum())
-        total_loss = loss_real + loss_fake + loss_dummy
-        total_loss.backward(retain_graph=True)
-
-        # fetch the gradient data
-        gradients_real = gradients_real[0]  # [B, N, D]
-        gradients_fake = gradients_fake[0]  # [B, N, D]
-
-        # generate the weight maps
-        self.weight_real, self.weight_fake = self.generate_weight_map(gradients_real, gradients_fake)
-
-        # apply the weights to the adversarial loss
-        loss_co_real = torch.mean(self.weight_real * torch.log(real_scores + 1e-8))
-        loss_co_fake = torch.mean(self.weight_fake * torch.log(1 - fake_scores + 1e-8))
-
-        # compute and return the final content-aware adversarial loss
-        loss_co_adv = -(loss_co_real + loss_co_fake)
-        return loss_co_adv
+        # raw GAN loss (criterionGAN is assumed to return a [B, N] loss map)
+        loss = self.criterionGAN(scores, target)
+        # capture the gradient of scores, shape [B, N, D]
+        grad_scores = torch.autograd.grad(loss, scores, retain_graph=True)[0]
+        # overall mean gradient (mean over the N dimension)
+        grad_mean = torch.mean(grad_scores, dim=1)  # [B, D]
+        # cosine similarities δ_i
+        cosine = self.compute_cosine_similarity(grad_scores, grad_mean)  # [B, N]
+        # weight map w_i
+        weight = self.generate_weight_map(cosine)  # [B, N]
+        # weighted GAN loss
+        weighted_loss = torch.mean(weight * self.criterionGAN(scores, target))
+        return weighted_loss, weight
 
 class ContentAwareTemporalNorm(nn.Module):
     def __init__(self, gamma_stride=0.1, kernel_size=21, sigma=5.0):
@@ -158,38 +113,52 @@ class ContentAwareTemporalNorm(nn.Module):
         self.gamma_stride = gamma_stride  # controls the overall motion magnitude
         self.smoother = GaussianBlur(kernel_size, sigma=sigma)  # Gaussian smoothing layer
 
+    def upsample_weight_map(self, weight_patch, target_size=(256, 256)):
+        # weight_patch: [B, 1, H, W], the reshaped weight_map
+        weight_full = F.interpolate(
+            weight_patch,
+            size=target_size,
+            mode='bilinear',  # or 'nearest', depending on the need
+            align_corners=False
+        )
+        return weight_full
+
     def forward(self, weight_map):
         """
         Generate the content-aware optical flow
         Args:
-            weight_map: [B, 1, H, W] weight map (from the content-aware optimization module)
+            weight_map: [B, N] weight map (from ContentAwareOptimization), where N=576
         Returns:
             F_content: [B, 2, H, W] the generated optical-flow field (x/y displacements)
         """
-        print(weight_map.shape)
-        B, _, H, W = weight_map.shape
+        B = weight_map.shape[0]
+        N = weight_map.shape[1]
+        # assume N is a perfect square and compute the side length (e.g. 576 -> 24x24)
+        side = int(math.sqrt(N))
+        weight_map_2d = weight_map.view(B, 1, side, side)  # reshape to [B, 1, side, side]
 
-        # 1. normalize the weight map
-        # keep the relative strength of each region while bounding the value range
-        weight_norm = F.normalize(weight_map, p=1, dim=(2,3))  # L1 normalization [B,1,H,W]
+        # upsample the weight map to full resolution
+        weight_full = self.upsample_weight_map(weight_map_2d)  # [B, 1, 256, 256] (for example)
 
-        # 2. draw Gaussian noise (same size as the flow field)
-        z = torch.randn(B, 2, H, W, device=weight_map.device)  # [B,2,H,W]
+        # normalize the weight map (L1 normalization)
+        weight_norm = F.normalize(weight_full, p=1, dim=(2,3))
 
-        # 3. synthesize the base flow
-        # expand the weight map to 2 channels (x/y directions share the weights)
-        weight_expanded = weight_norm.expand(-1, 2, -1, -1)  # [B,2,H,W]
-        F_raw = self.gamma_stride * weight_expanded * z  # [B,2,H,W]  # Eq. 9
+        # draw Gaussian noise
+        B, _, H, W = weight_norm.shape
+        z = torch.randn(B, 2, H, W, device=weight_norm.device)
 
-        # 4. smoothing (preserves structural continuity)
-        # Gaussian-blur each channel independently
-        F_smooth = self.smoother(F_raw)  # [B,2,H,W]
+        # synthesize the base flow
+        weight_expanded = weight_norm.expand(-1, 2, -1, -1)
+        F_raw = self.gamma_stride * weight_expanded * z
 
-        # 5. dynamic-range adjustment (optional)
-        # clamp the flow magnitude to avoid extreme displacements
-        F_content = torch.tanh(F_smooth)  # scale into [-1,1]
-        return F_content
+        # smoothing
+        F_smooth = self.smoother(F_raw)
+
+        # dynamic-range adjustment
+        F_content = torch.tanh(F_smooth)
+        return F_content
 
 class RomaUnsbModel(BaseModel):
     @staticmethod
@@ -197,31 +166,18 @@ class RomaUnsbModel(BaseModel):
         """Configure the CTNx model's specific options"""
         parser.add_argument('--lambda_GAN', type=float, default=1.0, help='weight for GAN loss: GAN(G(X))')
-        parser.add_argument('--lambda_NCE', type=float, default=1.0, help='weight for NCE loss: NCE(G(X), X)')
-        parser.add_argument('--lambda_SB', type=float, default=0.1, help='weight for SB loss')
         parser.add_argument('--lambda_ctn', type=float, default=1.0, help='weight for content-aware temporal norm')
         parser.add_argument('--lambda_D_ViT', type=float, default=1.0, help='weight for discriminator')
         parser.add_argument('--lambda_global', type=float, default=1.0, help='weight for Global Structural Consistency')
+        parser.add_argument('--lambda_spatial', type=float, default=1.0, help='weight for Local Structural Consistency')
-        parser.add_argument('--nce_idt', type=util.str2bool, nargs='?', const=True, default=False, help='use NCE loss for identity mapping: NCE(G(Y), Y))')
-        parser.add_argument('--nce_layers', type=str, default='0,4,8,12,16', help='compute NCE loss on which layers')
-        parser.add_argument('--nce_includes_all_negatives_from_minibatch',
-                            type=util.str2bool, nargs='?', const=True, default=False,
-                            help='(used for single image translation) If True, include the negatives from the other samples of the minibatch when computing the contrastive loss. Please see models/patchnce.py for more details.')
-        parser.add_argument('--netF', type=str, default='mlp_sample', choices=['sample', 'reshape', 'mlp_sample'], help='how to downsample the feature map')
-        parser.add_argument('--netF_nc', type=int, default=256)
-        parser.add_argument('--nce_T', type=float, default=0.07, help='temperature for NCE loss')
-        parser.add_argument('--lmda_1', type=float, default=0.1)
-        parser.add_argument('--num_patches', type=int, default=256, help='number of patches per layer')
-        parser.add_argument('--flip_equivariance',
-                            type=util.str2bool, nargs='?', const=True, default=False,
-                            help="Enforce flip-equivariance as additional regularization. It's used by FastCUT, but not CUT")
         parser.add_argument('--lambda_inc', type=float, default=1.0, help='incremental weight for content-aware optimization')
-        parser.add_argument('--eta_ratio', type=float, default=0.1, help='ratio of content-rich regions')
+        parser.add_argument('--local_nums', type=int, default=64, help='number of local patches')
+        parser.add_argument('--side_length', type=int, default=7)
+        parser.add_argument('--nce_layers', type=str, default='0,4,8,12,16', help='compute NCE loss on which layers')
+        parser.add_argument('--eta_ratio', type=float, default=0.4, help='ratio of content-rich regions')
+        parser.add_argument('--gamma_stride', type=float, default=20, help='ratio of stride for computing the similarity matrix')
         parser.add_argument('--atten_layers', type=str, default='5', help='compute Cross-Similarity on which layers')
         parser.add_argument('--tau', type=float, default=0.01, help='Entropy parameter')
@@ -229,12 +185,7 @@ class RomaUnsbModel(BaseModel):
         parser.add_argument('--n_mlp', type=int, default=3, help='only used if netD==n_layers')
 
-        parser.set_defaults(pool_size=0)  # no image pooling
         opt, _ = parser.parse_known_args()
 
-        # switch directly into SB mode
-        parser.set_defaults(nce_idt=True, lambda_NCE=1.0)
-
         return parser
@@ -243,11 +194,11 @@ class RomaUnsbModel(BaseModel):
         BaseModel.__init__(self, opt)
 
         # training losses to print
-        self.loss_names = ['G_GAN_1', 'D_real_1', 'D_fake_1', 'G_1', 'NCE_1', 'SB_1',
-                           'G_2']
-        self.visual_names = ['real_A', 'real_A_noisy', 'fake_B', 'real_B']
+        self.loss_names = ['G_GAN', 'D_ViT', 'G', 'global', 'spatial', 'ctn']
+        self.visual_names = ['real_A0', 'fake_B0', 'real_B0', 'real_A1', 'fake_B1', 'real_B1']
         self.atten_layers = [int(i) for i in self.opt.atten_layers.split(',')]
 
         if self.opt.phase == 'test':
             self.visual_names = ['real']
             for NFE in range(self.opt.num_timesteps):
@@ -255,24 +206,18 @@ class RomaUnsbModel(BaseModel):
                 self.visual_names.append(fake_name)
 
         self.nce_layers = [int(i) for i in self.opt.nce_layers.split(',')]
-        if opt.nce_idt and self.isTrain:
-            self.loss_names += ['NCE_Y']
-            self.visual_names += ['idt_B']
 
         if self.isTrain:
-            self.model_names = ['G', 'D_ViT', 'E']
+            self.model_names = ['G', 'D_ViT']
         else:
             self.model_names = ['G']
 
-        print(f'input_nc = {self.opt.input_nc}')
         # create the networks
         self.netG = networks.define_G(opt.input_nc, opt.output_nc, opt.ngf, opt.netG, opt.normG, not opt.no_dropout, opt.init_type, opt.init_gain, opt.no_antialias, opt.no_antialias_up, self.gpu_ids, opt)
 
         if self.isTrain:
-            self.netE = networks.define_D(opt.output_nc*4, opt.ndf, opt.netD, opt.n_layers_D, opt.normD, opt.init_type, opt.init_gain, opt.no_antialias, self.gpu_ids, opt)
             self.resize = tfs.Resize(size=(384,384), antialias=True)
@@ -284,14 +229,9 @@ class RomaUnsbModel(BaseModel):
             # define the loss functions
             self.criterionL1 = torch.nn.L1Loss().to(self.device)
             self.criterionGAN = networks.GANLoss(opt.gan_mode).to(self.device)
-            self.criterionNCE = []
-            for nce_layer in self.nce_layers:
-                self.criterionNCE.append(PatchNCELoss(opt).to(self.device))
-            self.criterionIdt = torch.nn.L1Loss().to(self.device)
             self.optimizer_G = torch.optim.Adam(self.netG.parameters(), lr=opt.lr, betas=(opt.beta1, opt.beta2))
             self.optimizer_D = torch.optim.Adam(self.netD_ViT.parameters(), lr=opt.lr, betas=(opt.beta1, opt.beta2))
-            self.optimizer_E = torch.optim.Adam(self.netE.parameters(), lr=opt.lr, betas=(opt.beta1, opt.beta2))
-            self.optimizers = [self.optimizer_G, self.optimizer_D, self.optimizer_E]
+            self.optimizers = [self.optimizer_G, self.optimizer_D]
 
             self.cao = ContentAwareOptimization(opt.lambda_inc, opt.eta_ratio)  # the loss function
             self.ctn = ContentAwareTemporalNorm()  # the generated pseudo optical flow
@@ -303,19 +243,6 @@ class RomaUnsbModel(BaseModel):
         initialized at the first feedforward pass with some input images.
         Please also see PatchSampleF.create_mlp(), which is called at the first forward() call.
         """
-        #bs_per_gpu = data["A"].size(0) // max(len(self.opt.gpu_ids), 1)
-        #self.set_input(data)
-        #self.real_A = self.real_A[:bs_per_gpu]
-        #self.real_B = self.real_B[:bs_per_gpu]
-        #self.forward()                     # compute fake images: G(A)
-        #if self.opt.isTrain:
-        #
-        #    self.compute_G_loss().backward()
-        #    self.compute_D_loss().backward()
-        #    self.compute_E_loss().backward()
-        #    if self.opt.lambda_NCE > 0.0:
-        #        self.optimizer_F = torch.optim.Adam(self.netF.parameters(), lr=self.opt.lr, betas=(self.opt.beta1, self.opt.beta2))
-        #        self.optimizers.append(self.optimizer_F)
         pass
 
     def optimize_parameters(self):
@@ -323,7 +250,6 @@ class RomaUnsbModel(BaseModel):
         self.forward()
 
         self.netG.train()
-        self.netE.train()
         self.netD_ViT.train()
 
         # update D
@@ -333,19 +259,9 @@ class RomaUnsbModel(BaseModel):
         self.loss_D.backward()
         self.optimizer_D.step()
 
-        # update E
-        self.set_requires_grad(self.netE, True)
-        self.optimizer_E.zero_grad()
-        self.loss_E = self.compute_E_loss()
-        self.loss_E.backward()
-        self.optimizer_E.step()
-
         # update G
         self.set_requires_grad(self.netD_ViT, False)
-        self.set_requires_grad(self.netE, False)
         self.optimizer_G.zero_grad()
         self.loss_G = self.compute_G_loss()
         self.loss_G.backward()
         self.optimizer_G.step()
@@ -365,223 +281,110 @@ class RomaUnsbModel(BaseModel):
         self.image_paths = input['A_paths' if AtoB else 'B_paths']
 
-    def tokens_concat(self, origin_tokens, adjacent_size):
-        adj_size = adjacent_size
-        B, token_num, C = origin_tokens.shape[0], origin_tokens.shape[1], origin_tokens.shape[2]
-        S = int(math.sqrt(token_num))
-        if S * S != token_num:
-            print('Error! Not a square!')
-        token_map = origin_tokens.clone().reshape(B,S,S,C)
-        cut_patch_list = []
-        for i in range(0, S, adj_size):
-            for j in range(0, S, adj_size):
-                i_left = i
-                i_right = i + adj_size + 1 if i + adj_size <= S else S + 1
-                j_left = j
-                j_right = j + adj_size if j + adj_size <= S else S + 1
-                cut_patch = token_map[:, i_left:i_right, j_left: j_right, :]
-                cut_patch = cut_patch.reshape(B,-1,C)
-                cut_patch = torch.mean(cut_patch, dim=1, keepdim=True)
-                cut_patch_list.append(cut_patch)
-        result = torch.cat(cut_patch_list,dim=1)
-        return result
-
-    def cat_results(self, origin_tokens, adj_size_list):
-        res_list = [origin_tokens]
-        for ad_s in adj_size_list:
-            cat_result = self.tokens_concat(origin_tokens, ad_s)
-            res_list.append(cat_result)
-        result = torch.cat(res_list, dim=1)
-        return result
     def forward(self):
         """Run forward pass; called by both functions <optimize_parameters> and <test>."""
+        self.fake_B0 = self.netG(self.real_A0)
+        self.fake_B1 = self.netG(self.real_A1)
 
-        # ============ step 1: multi-step stochastic generation for real_A / real_A2 ============
-        tau = self.opt.tau
-        T = self.opt.num_timesteps
-        incs = np.array([0] + [1/(i+1) for i in range(T-1)])
-        times = np.cumsum(incs)
-        times = times / times[-1]
-        times = 0.5 * times[-1] + 0.5 * times  # [0.5,1]
-        times = np.concatenate([np.zeros(1), times])
-        times = torch.tensor(times).float().cuda()
-        self.times = times
-        bs = self.real_A0.size(0)
-        time_idx = (torch.randint(T, size=[1]).cuda() * torch.ones(size=[1]).cuda()).long()
-        self.time_idx = time_idx
-
-        with torch.no_grad():
-            self.netG.eval()
-            # ============ step 2: multi-step stochastic generation for real_A / real_A2 ============
-            for t in range(self.time_idx.int().item() + 1):
-                # compute the increment delta and inter/scale (for the per-timestep interpolation etc.)
-                if t > 0:
-                    delta = times[t] - times[t - 1]
-                    denom = times[-1] - times[t - 1]
-                    inter = (delta / denom).reshape(-1, 1, 1, 1)
-                    scale = (delta * (1 - delta / denom)).reshape(-1, 1, 1, 1)
-                # stochastic noise updates of Xt and Xt2
-                Xt = self.real_A0 if (t == 0) else (1 - inter) * Xt + inter * Xt_1.detach() + \
-                    (scale * tau).sqrt() * torch.randn_like(Xt).to(self.real_A0.device)
-                time_idx = (t * torch.ones(size=[self.real_A0.shape[0]]).to(self.real_A0.device)).long()
-                z = torch.randn(size=[self.real_A0.shape[0], 4 * self.opt.ngf]).to(self.real_A0.device)
-                self.time = times[time_idx]
-                Xt_1 = self.netG(Xt, self.time, z)
-                Xt2 = self.real_A1 if (t == 0) else (1 - inter) * Xt2 + inter * Xt_12.detach() + \
-                    (scale * tau).sqrt() * torch.randn_like(Xt2).to(self.real_A1.device)
-                time_idx = (t * torch.ones(size=[self.real_A1.shape[0]]).to(self.real_A1.device)).long()
-                z = torch.randn(size=[self.real_A1.shape[0], 4 * self.opt.ngf]).to(self.real_A1.device)
-                Xt_12 = self.netG(Xt2, self.time, z)
-
-            # keep the denoised intermediate results (real_A_noisy etc.) for the concatenation step
-            self.real_A_noisy = Xt.detach()
-            self.real_A_noisy2 = Xt2.detach()
-
-        # ============ step 3: concatenate the inputs and run the network =============
-        bs = self.real_A0.size(0)
-        z_in = torch.randn(size=[bs, 4 * self.opt.ngf]).to(self.real_A0.device)
-        z_in2 = torch.randn(size=[bs, 4 * self.opt.ngf]).to(self.real_A1.device)
-        # concatenate real_A and real_B (e.g. when nce_idt=True), and treat real_A_noisy and XtB the same way
-        self.real = self.real_A0
-        self.realt = self.real_A_noisy
-
-        if self.opt.flip_equivariance:
-            self.flipped_for_equivariance = self.opt.isTrain and (np.random.random() < 0.5)
-            if self.flipped_for_equivariance:
-                self.real = torch.flip(self.real, [3])
-                self.realt = torch.flip(self.realt, [3])
-
-        print(f'fake_B0: {self.real_A0.shape}, fake_B1: {self.real_A1.shape}')
-        self.fake_B0 = self.netG(self.real_A0, self.time, z_in)
-        self.fake_B1 = self.netG(self.real_A1, self.time, z_in2)
-        print(f'fake_B0: {self.fake_B0.shape}, fake_B1: {self.fake_B1.shape}')
-
-        if self.opt.phase == 'train':
+        if self.opt.isTrain:
             real_A0 = self.real_A0
             real_A1 = self.real_A1
             real_B0 = self.real_B0
             real_B1 = self.real_B1
             fake_B0 = self.fake_B0
             fake_B1 = self.fake_B1
             self.real_A0_resize = self.resize(real_A0)
             self.real_A1_resize = self.resize(real_A1)
             real_B0 = self.resize(real_B0)
             real_B1 = self.resize(real_B1)
             self.fake_B0_resize = self.resize(fake_B0)
             self.fake_B1_resize = self.resize(fake_B1)
             self.mutil_real_A0_tokens = self.netPreViT(self.real_A0_resize, self.atten_layers, get_tokens=True)
             self.mutil_real_A1_tokens = self.netPreViT(self.real_A1_resize, self.atten_layers, get_tokens=True)
             self.mutil_real_B0_tokens = self.netPreViT(real_B0, self.atten_layers, get_tokens=True)
             self.mutil_real_B1_tokens = self.netPreViT(real_B1, self.atten_layers, get_tokens=True)
             self.mutil_fake_B0_tokens = self.netPreViT(self.fake_B0_resize, self.atten_layers, get_tokens=True)
             self.mutil_fake_B1_tokens = self.netPreViT(self.fake_B1_resize, self.atten_layers, get_tokens=True)
-            # [[1,576,768],[1,576,768],[1,576,768]]
-            # [3,576,768]
-
-            ## gradients of the generated image
-            #fake_gradient = torch.autograd.grad(self.mutil_fake_B0_tokens.sum(), self.mutil_fake_B0_tokens, create_graph=True)[0]
-            #
-            ## gradient map
-            #self.weight_fake = self.cao.generate_weight_map(fake_gradient)
-            #
-            ## CTN optical-flow map of the generated image
-            #self.f_content = self.ctn(self.weight_fake)
-            #
-            ## warped images
-            #self.warped_real_A_noisy2 = warp(self.real_A_noisy, self.f_content)
-            #self.warped_fake_B0 = warp(self.fake_B0,self.f_content)
-            #
-            ## second pass through the generator
-            #self.warped_fake_B0_2 = self.netG(self.warped_real_A_noisy2, self.time, z_in)
-            #warped_fake_B0_2=self.warped_fake_B0_2
-            #warped_fake_B0=self.warped_fake_B0
-            #self.warped_fake_B0_2_resize = self.resize(warped_fake_B0_2)
-            #self.warped_fake_B0_resize = self.resize(warped_fake_B0)
-            #self.mutil_warped_fake_B0_tokens = self.netPreViT(self.warped_fake_B0_resize, self.atten_layers, get_tokens=True)
-            #self.mutil_fake_B0_2_tokens = self.netPreViT(self.warped_fake_B0_2_resize, self.atten_layers, get_tokens=True)
-    def compute_D_loss(self):  # the discriminator is still unchanged
-        """Calculate GAN loss for the discriminator"""
+    def compute_D_loss(self):
+        """Calculate GAN loss with Content-Aware Optimization"""
         lambda_D_ViT = self.opt.lambda_D_ViT
 
-        fake_B0_tokens = self.mutil_fake_B0_tokens[0].detach()
-        fake_B1_tokens = self.mutil_fake_B1_tokens[0].detach()
+        # handle real_B0 and fake_B0
         real_B0_tokens = self.mutil_real_B0_tokens[0]
+        pred_real0 = self.netD_ViT(real_B0_tokens)
+        fake_B0_tokens = self.mutil_fake_B0_tokens[0].detach()
+        pred_fake0 = self.netD_ViT(fake_B0_tokens)
+        loss_real0, self.weight_real0 = self.cao(pred_real0, True)
+        loss_fake0, self.weight_fake0 = self.cao(pred_fake0, False)
+
+        # handle real_B1 and fake_B1
         real_B1_tokens = self.mutil_real_B1_tokens[0]
+        pred_real1 = self.netD_ViT(real_B1_tokens)
+        fake_B1_tokens = self.mutil_fake_B1_tokens[0].detach()
+        pred_fake1 = self.netD_ViT(fake_B1_tokens)
+        loss_real1, self.weight_real1 = self.cao(pred_real1, True)
+        loss_fake1, self.weight_fake1 = self.cao(pred_fake1, False)
 
-        pre_fake0_ViT = self.netD_ViT(fake_B0_tokens)
-        pre_fake1_ViT = self.netD_ViT(fake_B1_tokens)
-        self.loss_D_fake_ViT = (self.criterionGAN(pre_fake0_ViT, False).mean() + self.criterionGAN(pre_fake1_ViT, False).mean()) * 0.5 * lambda_D_ViT
-        pred_real0_ViT = self.netD_ViT(real_B0_tokens)
-        pred_real1_ViT = self.netD_ViT(real_B1_tokens)
-        self.loss_D_real_ViT = (self.criterionGAN(pred_real0_ViT, True).mean() + self.criterionGAN(pred_real1_ViT, True).mean()) * 0.5 * lambda_D_ViT
-        self.loss_D_ViT = (self.loss_D_fake_ViT + self.loss_D_real_ViT) * 0.5
+        # combined loss
+        self.loss_D_ViT = (loss_real0 + loss_fake0 + loss_real1 + loss_fake1) * 0.25 * lambda_D_ViT
         return self.loss_D_ViT
-    def compute_E_loss(self):
-        """Compute the loss of discriminator E"""
-        print(f'resl_A_noisy: {self.real_A_noisy.shape} \n fake_B0: {self.fake_B0.shape}')
-        XtXt_1 = torch.cat([self.real_A_noisy, self.fake_B0.detach()], dim=1)
-        XtXt_2 = torch.cat([self.real_A_noisy2, self.fake_B1.detach()], dim=1)
-        temp = torch.logsumexp(self.netE(XtXt_1, self.time, XtXt_2).reshape(-1), dim=0).mean()
-        self.loss_E = -self.netE(XtXt_1, self.time, XtXt_1).mean() + temp + temp**2
-        return self.loss_E
     def compute_G_loss(self):
-        """Compute the generator's GAN loss"""
+        """Compute the generator loss"""
+        # initialize the total losses
+        self.loss_G_GAN = 0.0
+        self.loss_ctn = 0.0
+        self.loss_global = 0.0
+        self.loss_spatial = 0.0
+
+        # CTN loss
+        if self.opt.lambda_ctn > 0.0:
+            # generate the flow maps (from the discriminator's weight maps)
+            self.f_content0 = self.ctn(self.weight_fake0.detach())
+            self.f_content1 = self.ctn(self.weight_fake1.detach())
+            # warped images
+            self.warped_real_A0 = warp(self.real_A0, self.f_content0)
+            self.warped_real_A1 = warp(self.real_A1, self.f_content1)
+            self.warped_fake_B0 = warp(self.fake_B0, self.f_content0)
+            self.warped_fake_B1 = warp(self.fake_B1, self.f_content1)
+            # second generator pass
+            self.warped_fake_B0_2 = self.netG(self.warped_real_A0)
+            self.warped_fake_B1_2 = self.netG(self.warped_real_A1)
+            # L2 losses
+            self.loss_ctn0 = F.mse_loss(self.warped_fake_B0_2, self.warped_fake_B0)
+            self.loss_ctn1 = F.mse_loss(self.warped_fake_B1_2, self.warped_fake_B1)
+            self.loss_ctn = (self.loss_ctn0 + self.loss_ctn1) * 0.5
+
+        # GAN loss (with ContentAwareOptimization)
         if self.opt.lambda_GAN > 0.0:
-            pred_fake = self.netD_ViT(self.mutil_fake_B0_tokens[0])
-            self.loss_G_GAN = self.criterionGAN(pred_fake, True).mean() * self.opt.lambda_GAN
+            pred_fake0 = self.netD_ViT(self.mutil_fake_B0_tokens[0])
+            pred_fake1 = self.netD_ViT(self.mutil_fake_B1_tokens[0])
+            self.loss_G_GAN0 = self.criterionGAN(pred_fake0, True).mean()
+            self.loss_G_GAN1 = self.criterionGAN(pred_fake1, True).mean()
+            self.loss_G_GAN = (self.loss_G_GAN0 + self.loss_G_GAN1) * 0.5
         else:
             self.loss_G_GAN = 0.0
 
-        self.loss_SB = 0
-        if self.opt.lambda_SB > 0.0:
-            XtXt_1 = torch.cat([self.real_A_noisy, self.fake_B0], dim=1)
-            XtXt_2 = torch.cat([self.real_A_noisy2, self.fake_B1], dim=1)
-            bs = self.opt.batch_size
-            # eq.9
-            ET_XY = self.netE(XtXt_1, self.time, XtXt_1).mean() - torch.logsumexp(self.netE(XtXt_1, self.time, XtXt_2).reshape(-1), dim=0)
-            self.loss_SB = -(self.opt.num_timesteps - self.time[0]) / self.opt.num_timesteps * self.opt.tau * ET_XY
-            self.loss_SB += self.opt.tau * torch.mean((self.real_A_noisy - self.fake_B0) ** 2)
-
-        if self.opt.lambda_global > 0.0:
-            loss_global = self.calculate_similarity(self.real_A0, self.fake_B0) + self.calculate_similarity(self.real_A1, self.fake_B1)
-            loss_global *= 0.5
-        else:
-            loss_global = 0.0
-
-        self.l2_loss = 0.0
-        #if self.opt.lambda_ctn > 0.0:
-        #    wapped_fake_B = warp(self.fake_B, self.f_content)  # use updated self.f_content
-        #    self.l2_loss = F.mse_loss(self.fake_B_2, wapped_fake_B)  # complete the loss calculation
-        self.loss_G = self.loss_G_GAN + self.opt.lambda_SB * self.loss_SB + self.opt.lambda_ctn * self.l2_loss + loss_global * self.opt.lambda_global
+        if self.opt.lambda_global or self.opt.lambda_spatial > 0.0:
+            self.loss_global, self.loss_spatial = self.calculate_attention_loss()
+
+        # total loss
+        self.loss_G = self.opt.lambda_GAN * self.loss_G_GAN + \
+                      self.opt.lambda_ctn * self.loss_ctn + \
+                      self.opt.lambda_global * self.loss_global + \
+                      self.opt.lambda_spatial * self.loss_spatial
         return self.loss_G
     def calculate_attention_loss(self):
         n_layers = len(self.atten_layers)
         mutil_real_A0_tokens = self.mutil_real_A0_tokens
@@ -604,20 +407,19 @@ class RomaUnsbModel(BaseModel):
             local_id = np.random.permutation(tokens_cnt)
             local_id = local_id[:int(min(local_nums, tokens_cnt))]
-            mutil_real_A0_local_tokens = self.netPreViT(self.resize(self.real_A0), self.atten_layers, get_tokens=True, local_id=local_id, side_length=self.opt.side_length)
-            mutil_real_A1_local_tokens = self.netPreViT(self.resize(self.real_A1), self.atten_layers, get_tokens=True, local_id=local_id, side_length=self.opt.side_length)
-            mutil_fake_B0_local_tokens = self.netPreViT(self.resize(self.fake_B0), self.atten_layers, get_tokens=True, local_id=local_id, side_length=self.opt.side_length)
-            mutil_fake_B1_local_tokens = self.netPreViT(self.resize(self.fake_B1), self.atten_layers, get_tokens=True, local_id=local_id, side_length=self.opt.side_length)
+            mutil_real_A0_local_tokens = self.netPreViT(self.real_A0_resize, self.atten_layers, get_tokens=True, local_id=local_id, side_length=self.opt.side_length)
+            mutil_real_A1_local_tokens = self.netPreViT(self.real_A1_resize, self.atten_layers, get_tokens=True, local_id=local_id, side_length=self.opt.side_length)
+            mutil_fake_B0_local_tokens = self.netPreViT(self.fake_B0_resize, self.atten_layers, get_tokens=True, local_id=local_id, side_length=self.opt.side_length)
+            mutil_fake_B1_local_tokens = self.netPreViT(self.fake_B1_resize, self.atten_layers, get_tokens=True, local_id=local_id, side_length=self.opt.side_length)
             loss_spatial = self.calculate_similarity(mutil_real_A0_local_tokens, mutil_fake_B0_local_tokens) + self.calculate_similarity(mutil_real_A1_local_tokens, mutil_fake_B1_local_tokens)
             loss_spatial *= 0.5
         else:
             loss_spatial = 0.0
-        return loss_global * self.opt.lambda_global, loss_spatial * self.opt.lambda_spatial
+        return loss_global, loss_spatial
     def calculate_similarity(self, mutil_src_tokens, mutil_tgt_tokens):
         loss = 0.0
         n_layers = len(self.atten_layers)
@@ -631,5 +433,3 @@ class RomaUnsbModel(BaseModel):
         loss = loss / n_layers
         return loss
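
For reference, the content-aware weighting that ContentAwareOptimization implements can be exercised on its own. The sketch below is an illustration under stated assumptions, not the repo's API: it stands in a plain LSGAN-style MSE for networks.GANLoss and replaces the per-sample Python loop with gather/scatter, but follows the same recipe: take the gradient of the loss w.r.t. the per-token scores, compare each token's gradient to the mean gradient by cosine similarity, and boost the weights of the eta_ratio fraction of tokens that deviate most.

import torch
import torch.nn.functional as F

def content_aware_weights(scores, target, lambda_inc=2.0, eta_ratio=0.4):
    # scores: [B, N, D] per-token discriminator outputs (requires_grad=True)
    # LSGAN-style stand-in for networks.GANLoss (an assumption, see above)
    loss = F.mse_loss(scores, torch.full_like(scores, float(target)))
    grad = torch.autograd.grad(loss, scores, retain_graph=True)[0]      # [B, N, D]
    grad_mean = grad.mean(dim=1)                                        # [B, D]
    cosine = F.cosine_similarity(grad, grad_mean.unsqueeze(1), dim=2)   # [B, N]
    k = int(eta_ratio * cosine.shape[1])
    _, idx = torch.topk(-cosine, k, dim=1)         # the k most deviating tokens
    weights = torch.ones_like(cosine)
    sel = cosine.gather(1, idx)
    weights.scatter_(1, idx, lambda_inc / (torch.exp(sel.abs()) + 1e-6))
    return weights                                 # [B, N], 1.0 elsewhere

scores = torch.randn(2, 576, 1, requires_grad=True)
w = content_aware_weights(scores, target=True)
print(w.shape)                                     # torch.Size([2, 576])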


@@ -0,0 +1,391 @@
import numpy as np
import math
import timm
import torch
import torchvision.models as models
import torch.nn as nn
import torch.nn.functional as F
from torchvision.transforms import GaussianBlur
from .base_model import BaseModel
from . import networks
from .patchnce import PatchNCELoss
import util.util as util
from torchvision.transforms import transforms as tfs
def warp(image, flow):  # warp operation
    """
    Image-warping function based on optical flow
    Args:
        image: [B, C, H, W] input image
        flow: [B, 2, H, W] optical-flow field (x/y displacements)
    Returns:
        warped: [B, C, H, W] the warped image
    """
    B, C, H, W = image.shape
    # build the grid coordinates
    grid_x, grid_y = torch.meshgrid(torch.arange(W), torch.arange(H))
    grid = torch.stack((grid_x, grid_y), dim=0).float().to(image.device)  # [2,H,W]
    grid = grid.unsqueeze(0).repeat(B,1,1,1)  # [B,2,H,W]
    # apply the flow displacement (normalized to [-1,1])
    new_grid = grid + flow
    new_grid[:,0,:,:] = 2.0 * new_grid[:,0,:,:] / (W-1) - 1.0  # x direction
    new_grid[:,1,:,:] = 2.0 * new_grid[:,1,:,:] / (H-1) - 1.0  # y direction
    new_grid = new_grid.permute(0,2,3,1)  # [B,H,W,2]
    # bilinear interpolation
    return F.grid_sample(image, new_grid, align_corners=True)
class ContentAwareOptimization(nn.Module):
    def __init__(self, lambda_inc=2.0, eta_ratio=0.4):
        super().__init__()
        self.lambda_inc = lambda_inc  # controls the weight increment for content-rich regions
        self.eta_ratio = eta_ratio  # fraction of content-rich regions to select
        self.criterionGAN = networks.GANLoss('lsgan').cuda()  # use the LSGAN loss

    def compute_cosine_similarity(self, grad_patch, grad_mean):
        """
        Compute the cosine similarity between each token's gradient and the overall mean gradient
        Args:
            grad_patch: [B, N, D], per-token gradients (from scores)
            grad_mean: [B, D], overall mean gradient
        Returns:
            cosine: [B, N], cosine similarities δ_i
        """
        # cosine similarity for every token
        cosine = F.cosine_similarity(grad_patch, grad_mean.unsqueeze(1), dim=2)  # [B, N]
        return cosine

    def generate_weight_map(self, cosine):
        """
        Generate the weight map from the cosine similarities
        Args:
            cosine: [B, N], cosine similarities δ_i
        Returns:
            weights: [B, N], weight map w_i
        """
        B, N = cosine.shape
        k = int(self.eta_ratio * N)  # select an eta_ratio fraction of the tokens
        _, indices = torch.topk(-cosine, k, dim=1)  # pick the k tokens that deviate the most
        weights = torch.ones_like(cosine)
        for b in range(B):
            selected_cosine = cosine[b, indices[b]]
            weights[b, indices[b]] = self.lambda_inc / (torch.exp(torch.abs(selected_cosine)) + 1e-6)
        return weights

    def forward(self, scores, target):
        """
        Forward pass: compute the weighted GAN loss
        Args:
            scores: [B, N, D], discriminator prediction scores
            target: target label (True / False)
        Returns:
            weighted_loss: the weighted GAN loss
            weight: weight map [B, N]
        """
        # raw GAN loss (criterionGAN is assumed to return a [B, N] loss map)
        loss = self.criterionGAN(scores, target)
        # capture the gradient of scores, shape [B, N, D]
        grad_scores = torch.autograd.grad(loss, scores, retain_graph=True)[0]
        # overall mean gradient (mean over the N dimension)
        grad_mean = torch.mean(grad_scores, dim=1)  # [B, D]
        # cosine similarities δ_i
        cosine = self.compute_cosine_similarity(grad_scores, grad_mean)  # [B, N]
        # weight map w_i
        weight = self.generate_weight_map(cosine)  # [B, N]
        # weighted GAN loss
        weighted_loss = torch.mean(weight * self.criterionGAN(scores, target))
        return weighted_loss, weight
class ContentAwareTemporalNorm(nn.Module):
    def __init__(self, gamma_stride=0.1, kernel_size=21, sigma=5.0):
        super().__init__()
        self.gamma_stride = gamma_stride  # controls the overall motion magnitude
        self.smoother = GaussianBlur(kernel_size, sigma=sigma)  # Gaussian smoothing layer

    def upsample_weight_map(self, weight_patch, target_size=(256, 256)):
        # weight_patch: [B, 1, H, W], the reshaped weight_map
        weight_full = F.interpolate(
            weight_patch,
            size=target_size,
            mode='bilinear',  # or 'nearest', depending on the need
            align_corners=False
        )
        return weight_full

    def forward(self, weight_map):
        """
        Generate the content-aware optical flow
        Args:
            weight_map: [B, N] weight map (from ContentAwareOptimization), where N=576
        Returns:
            F_content: [B, 2, H, W] the generated optical-flow field (x/y displacements)
        """
        B = weight_map.shape[0]
        N = weight_map.shape[1]
        # assume N is a perfect square and compute the side length (e.g. 576 -> 24x24)
        side = int(math.sqrt(N))
        weight_map_2d = weight_map.view(B, 1, side, side)  # reshape to [B, 1, side, side]

        # upsample the weight map to full resolution
        weight_full = self.upsample_weight_map(weight_map_2d)  # [B, 1, 256, 256] (for example)

        # normalize the weight map (L1 normalization)
        weight_norm = F.normalize(weight_full, p=1, dim=(2,3))

        # draw Gaussian noise
        B, _, H, W = weight_norm.shape
        z = torch.randn(B, 2, H, W, device=weight_norm.device)

        # synthesize the base flow
        weight_expanded = weight_norm.expand(-1, 2, -1, -1)
        F_raw = self.gamma_stride * weight_expanded * z

        # smoothing
        F_smooth = self.smoother(F_raw)

        # dynamic-range adjustment
        F_content = torch.tanh(F_smooth)
        return F_content
class RomaUnsbSingleModel(BaseModel):
    @staticmethod
    def modify_commandline_options(parser, is_train=True):
        """Configure the CTNx model's specific options"""
        parser.add_argument('--lambda_GAN', type=float, default=1.0, help='weight for GAN loss: GAN(G(X))')
        parser.add_argument('--lambda_ctn', type=float, default=1.0, help='weight for content-aware temporal norm')
        parser.add_argument('--lambda_D_ViT', type=float, default=1.0, help='weight for discriminator')
        parser.add_argument('--lambda_global', type=float, default=1.0, help='weight for Global Structural Consistency')
        parser.add_argument('--lambda_spatial', type=float, default=1.0, help='weight for Local Structural Consistency')
        parser.add_argument('--lambda_inc', type=float, default=1.0, help='incremental weight for content-aware optimization')
        parser.add_argument('--local_nums', type=int, default=64, help='number of local patches')
        parser.add_argument('--side_length', type=int, default=7)
        parser.add_argument('--nce_layers', type=str, default='0,4,8,12,16', help='compute NCE loss on which layers')
        parser.add_argument('--eta_ratio', type=float, default=0.4, help='ratio of content-rich regions')
        parser.add_argument('--gamma_stride', type=float, default=20, help='ratio of stride for computing the similarity matrix')
        parser.add_argument('--atten_layers', type=str, default='5', help='compute Cross-Similarity on which layers')
        parser.add_argument('--tau', type=float, default=0.01, help='Entropy parameter')
        parser.add_argument('--num_timesteps', type=int, default=5, help='# of discrim filters in the first conv layer')
        parser.add_argument('--n_mlp', type=int, default=3, help='only used if netD==n_layers')

        opt, _ = parser.parse_known_args()
        return parser
    def __init__(self, opt):
        BaseModel.__init__(self, opt)

        self.loss_names = ['G_GAN', 'D_ViT', 'G', 'global', 'spatial', 'ctn']
        self.visual_names = ['real_A', 'fake_B', 'real_B']
        self.atten_layers = [int(i) for i in self.opt.atten_layers.split(',')]

        if self.isTrain:
            self.model_names = ['G', 'D_ViT']
        else:  # during test time, only load G
            self.model_names = ['G']

        # define networks (both generator and discriminator)
        self.netG = networks.define_G(opt.input_nc, opt.output_nc, opt.ngf, opt.netG, opt.normG, not opt.no_dropout, opt.init_type, opt.init_gain, opt.no_antialias, opt.no_antialias_up, self.gpu_ids, opt)

        if self.isTrain:
            self.netD_ViT = networks.MLPDiscriminator().to(self.device)
            # self.netPreViT = timm.create_model("vit_base_patch32_384", pretrained=True).to(self.device)
            self.netPreViT = timm.create_model("vit_base_patch16_384", pretrained=True).to(self.device)
            self.resize = tfs.Resize(size=(384,384))
            # self.resize = tfs.Resize(size=(224, 224))

            # define loss functions
            self.criterionGAN = networks.GANLoss(opt.gan_mode).to(self.device)
            self.criterionL1 = torch.nn.L1Loss().to(self.device)
            self.optimizer_G = torch.optim.Adam(self.netG.parameters(), lr=opt.lr, betas=(opt.beta1, opt.beta2))
            self.optimizer_D_ViT = torch.optim.Adam(self.netD_ViT.parameters(), lr=opt.lr, betas=(opt.beta1, opt.beta2))
            self.optimizers.append(self.optimizer_G)
            self.optimizers.append(self.optimizer_D_ViT)

            self.cao = ContentAwareOptimization(opt.lambda_inc, opt.eta_ratio)  # the loss function
            self.ctn = ContentAwareTemporalNorm()  # the generated pseudo optical flow
    def data_dependent_initialize(self, data):
        """
        The feature network netF is defined in terms of the shape of the intermediate, extracted
        features of the encoder portion of netG. Because of this, the weights of netF are
        initialized at the first feedforward pass with some input images.
        Please also see PatchSampleF.create_mlp(), which is called at the first forward() call.
        """
        pass

    def optimize_parameters(self):
        # forward
        self.forward()

        # update D
        self.set_requires_grad(self.netD_ViT, True)
        self.optimizer_D_ViT.zero_grad()
        self.loss_D = self.compute_D_loss()
        self.loss_D.backward()
        self.optimizer_D_ViT.step()

        # update G
        self.set_requires_grad(self.netD_ViT, False)
        self.optimizer_G.zero_grad()
        self.loss_G = self.compute_G_loss()
        self.loss_G.backward()
        self.optimizer_G.step()
    def set_input(self, input):
        """Unpack input data from the dataloader and perform necessary pre-processing steps.
        Parameters:
            input (dict): include the data itself and its metadata information.
        The option 'direction' can be used to swap domain A and domain B.
        """
        AtoB = self.opt.direction == 'AtoB'
        self.real_A = input['A' if AtoB else 'B'].to(self.device)
        self.real_B = input['B' if AtoB else 'A'].to(self.device)
        self.image_paths = input['A_paths' if AtoB else 'B_paths']

    def forward(self):
        """Run forward pass; called by both functions <optimize_parameters> and <test>."""
        self.fake_B = self.netG(self.real_A)

        if self.opt.isTrain:
            real_A = self.real_A
            real_B = self.real_B
            fake_B = self.fake_B
            self.real_A_resize = self.resize(real_A)
            real_B = self.resize(real_B)
            self.fake_B_resize = self.resize(fake_B)
            self.mutil_real_A_tokens = self.netPreViT(self.real_A_resize, self.atten_layers, get_tokens=True)
            self.mutil_real_B_tokens = self.netPreViT(real_B, self.atten_layers, get_tokens=True)
            self.mutil_fake_B_tokens = self.netPreViT(self.fake_B_resize, self.atten_layers, get_tokens=True)
    def compute_D_loss(self):
        """Calculate GAN loss for the discriminator"""
        lambda_D_ViT = self.opt.lambda_D_ViT
        fake_B_tokens = self.mutil_fake_B_tokens[0].detach()
        real_B_tokens = self.mutil_real_B_tokens[0]

        pre_fake_ViT = self.netD_ViT(fake_B_tokens)
        pred_real_ViT = self.netD_ViT(real_B_tokens)

        self.loss_D_real_ViT, self.weight_real = self.cao(pred_real_ViT, True)
        self.loss_D_fake_ViT, self.weight_fake = self.cao(pre_fake_ViT, False)
        self.loss_D_ViT = (self.loss_D_fake_ViT + self.loss_D_real_ViT) * 0.5 * lambda_D_ViT
        return self.loss_D_ViT
    def compute_G_loss(self):
        if self.opt.lambda_ctn > 0.0:
            # generate the flow map (using the discriminator's weight map)
            self.f_content = self.ctn(self.weight_fake.detach())
            # warped images
            self.warped_real_A = warp(self.real_A, self.f_content)
            self.warped_fake_B = warp(self.fake_B, self.f_content)
            # second generator pass
            self.warped_fake_B2 = self.netG(self.warped_real_A)
            # compute the loss
            self.loss_ctn = self.criterionL1(self.warped_fake_B, self.warped_fake_B2) * self.opt.lambda_ctn
        else:
            self.loss_ctn = 0.0

        # if self.opt.lambda_GAN > 0.0:
        #     fake_B_tokens = self.mutil_fake_B_tokens[0]
        #     pred_fake_ViT = self.netD_ViT(fake_B_tokens)
        #     self.loss_G_GAN = self.criterionGAN(pred_fake_ViT, True) * self.opt.lambda_GAN
        # else:
        #     self.loss_G_GAN = 0.0
        if self.opt.lambda_GAN > 0.0:
            fake_B_tokens = self.mutil_fake_B_tokens[0]
            pred_fake_ViT = self.netD_ViT(fake_B_tokens)
            self.loss_G_fake_ViT, self.weight_real = self.cao(pred_fake_ViT, True)
            self.loss_G_GAN = self.loss_G_fake_ViT * self.opt.lambda_GAN
        else:
            self.loss_G_GAN = 0.0

        if self.opt.lambda_global > 0.0 or self.opt.lambda_spatial > 0.0:
            self.loss_global, self.loss_spatial = self.calculate_attention_loss()
        else:
            self.loss_global, self.loss_spatial = 0.0, 0.0

        self.loss_G = self.loss_G_GAN + self.loss_global + self.loss_spatial + self.loss_ctn
        return self.loss_G
    def calculate_attention_loss(self):
        n_layers = len(self.atten_layers)
        mutil_real_A_tokens = self.mutil_real_A_tokens
        mutil_fake_B_tokens = self.mutil_fake_B_tokens
        if self.opt.lambda_global > 0.0:
            loss_global = self.calculate_similarity(mutil_real_A_tokens, mutil_fake_B_tokens)
        else:
            loss_global = 0.0
        if self.opt.lambda_spatial > 0.0:
            loss_spatial = 0.0
            local_nums = self.opt.local_nums
            tokens_cnt = 576
            local_id = np.random.permutation(tokens_cnt)
            local_id = local_id[:int(min(local_nums, tokens_cnt))]
            mutil_real_A_local_tokens = self.netPreViT(self.real_A_resize, self.atten_layers, get_tokens=True, local_id=local_id, side_length=self.opt.side_length)
            mutil_fake_B_local_tokens = self.netPreViT(self.fake_B_resize, self.atten_layers, get_tokens=True, local_id=local_id, side_length=self.opt.side_length)
            loss_spatial = self.calculate_similarity(mutil_real_A_local_tokens, mutil_fake_B_local_tokens)
        else:
            loss_spatial = 0.0
        return loss_global * self.opt.lambda_global, loss_spatial * self.opt.lambda_spatial

    def calculate_similarity(self, mutil_src_tokens, mutil_tgt_tokens):
        loss = 0.0
        n_layers = len(self.atten_layers)
        for src_tokens, tgt_tokens in zip(mutil_src_tokens, mutil_tgt_tokens):
            src_tgt = src_tokens.bmm(tgt_tokens.permute(0,2,1))
            tgt_src = tgt_tokens.bmm(src_tokens.permute(0,2,1))
            cos_dis_global = F.cosine_similarity(src_tgt, tgt_src, dim=-1)
            loss += self.criterionL1(torch.ones_like(cos_dis_global), cos_dis_global).mean()
        loss = loss / n_layers
        return loss
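
Putting the pieces together, the training signal this compare adds is a warp-consistency (CTN) loss: a per-token weight map is turned into a smooth pseudo optical flow, and the generator is penalized when warping and generation fail to commute, i.e. L_ctn = L1(warp(G(x), F), G(warp(x, F))). Below is a self-contained sketch under stated assumptions: an identity stand-in for netG, a 24x24 token grid, 256x256 images, and weight_map_to_flow as a hypothetical helper mirroring ContentAwareTemporalNorm.

import math
import torch
import torch.nn.functional as F
from torchvision.transforms import GaussianBlur

def warp(image, flow):
    # image: [B, C, H, W]; flow: [B, 2, H, W] pixel displacements (x, y)
    B, C, H, W = image.shape
    gy, gx = torch.meshgrid(torch.arange(H), torch.arange(W), indexing='ij')
    grid = torch.stack((gx, gy), dim=0).float().to(image.device)        # [2, H, W]
    new_grid = grid.unsqueeze(0).repeat(B, 1, 1, 1) + flow
    new_grid[:, 0] = 2.0 * new_grid[:, 0] / (W - 1) - 1.0               # x to [-1, 1]
    new_grid[:, 1] = 2.0 * new_grid[:, 1] / (H - 1) - 1.0               # y to [-1, 1]
    return F.grid_sample(image, new_grid.permute(0, 2, 3, 1), align_corners=True)

def weight_map_to_flow(weight_map, size=256, gamma_stride=0.1):
    # weight_map: [B, N] with N a perfect square (e.g. 576 -> 24x24)
    B, N = weight_map.shape
    side = int(math.sqrt(N))
    w = weight_map.view(B, 1, side, side)
    w = F.interpolate(w, size=(size, size), mode='bilinear', align_corners=False)
    w = w / (w.sum(dim=(2, 3), keepdim=True) + 1e-8)                    # L1 normalize
    flow = gamma_stride * w.expand(-1, 2, -1, -1) * torch.randn(B, 2, size, size)
    return torch.tanh(GaussianBlur(21, sigma=5.0)(flow))                # smooth + bound

G = lambda x: x                          # identity stand-in for the generator
x = torch.rand(1, 3, 256, 256)
flow = weight_map_to_flow(torch.rand(1, 576))
loss_ctn = F.l1_loss(warp(G(x), flow), G(warp(x, flow)))
print(loss_ctn.item())                   # exactly 0 for the identity generator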


@@ -36,7 +36,7 @@ class BaseOptions():
         parser.add_argument('--ngf', type=int, default=64, help='# of gen filters in the last conv layer')
         parser.add_argument('--ndf', type=int, default=64, help='# of discrim filters in the first conv layer')
         parser.add_argument('--netD', type=str, default='basic_cond', choices=['basic_cond', 'basic', 'n_layers', 'pixel', 'patch', 'tilestylegan2', 'stylegan2'], help='specify discriminator architecture. The basic model is a 70x70 PatchGAN. n_layers allows you to specify the layers in the discriminator')
-        parser.add_argument('--netG', type=str, default='resnet_9blocks_cond', choices=['resnet_9blocks','resnet_9blocks_mask', 'resnet_6blocks', 'unet_256', 'unet_128', 'stylegan2', 'smallstylegan2', 'resnet_cat', 'resnet_9blocks_cond'], help='specify generator architecture')
+        parser.add_argument('--netG', type=str, default='resnet_9blocks', choices=['resnet_9blocks','resnet_9blocks_mask', 'resnet_6blocks', 'unet_256', 'unet_128', 'stylegan2', 'smallstylegan2', 'resnet_cat', 'resnet_9blocks_cond'], help='specify generator architecture')
         parser.add_argument('--n_layers_D', type=int, default=3, help='only used if netD==n_layers')
         parser.add_argument('--normG', type=str, default='instance', choices=['instance', 'batch', 'none'], help='instance normalization or batch normalization for G')
         parser.add_argument('--normD', type=str, default='instance', choices=['instance', 'batch', 'none'], help='instance normalization or batch normalization for D')


@@ -31,7 +31,7 @@ class TrainOptions(BaseOptions):
         parser.add_argument('--epoch_count', type=int, default=1, help='the starting epoch count, we save the model by <epoch_count>, <epoch_count>+<save_latest_freq>, ...')
         parser.add_argument('--phase', type=str, default='train', help='train, val, test, etc')
         parser.add_argument('--pretrained_name', type=str, default=None, help='resume training from another checkpoint')
 
         # training parameters
         parser.add_argument('--n_epochs', type=int, default=100, help='number of epochs with the initial learning rate')
         parser.add_argument('--n_epochs_decay', type=int, default=100, help='number of epochs to linearly decay learning rate to zero')

[Binary image files added; not shown. Each is 129-138 KiB.]

Some files were not shown because too many files have changed in this diff.