
What is: Adaptive Content Generating and Preserving Network?

Source: Towards Photo-Realistic Virtual Try-On by Adaptively Generating-Preserving Image Content
Year: 2020
Data Source: CC BY-SA - https://paperswithcode.com

ACGPN, or Adaptive Content Generating and Preserving Network, is a generative adversarial network for virtual clothing try-on applications.

In Step I, the Semantic Generation Module (SGM) takes the target clothing image $\mathcal{T}_{c}$, the pose map $\mathcal{M}_{p}$, and the fused body part mask $\mathcal{M}^{F}$ as input to predict the semantic layout, outputting the synthesized body part mask $\mathcal{M}^{S}_{\omega}$ and the target clothing mask $\mathcal{M}^{S}_{c}$.
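As a sketch, the SGM can be viewed as a generator that consumes its three inputs channel-wise and emits the two masks. Everything below (the 256x192 resolution, the 18-channel pose heatmaps, and the placeholder network body) is an assumption for illustration, not the paper's architecture:

```python
import numpy as np

H, W = 256, 192  # assumed working resolution (common in VITON-style setups)

def semantic_generation_module(target_clothes, pose_map, fused_body_mask):
    """Sketch of the SGM interface: concatenate the inputs channel-wise
    and return placeholder predictions for the synthesized body part mask
    M^S_omega and the target clothing mask M^S_c."""
    x = np.concatenate([target_clothes, pose_map, fused_body_mask], axis=0)
    # A real implementation would run x through a GAN generator here.
    synthesized_body_mask = np.zeros((1, H, W))     # M^S_omega (placeholder)
    synthesized_clothes_mask = np.zeros((1, H, W))  # M^S_c (placeholder)
    return synthesized_body_mask, synthesized_clothes_mask

clothes = np.zeros((3, H, W))     # T_c: RGB target clothing image
pose = np.zeros((18, H, W))       # M_p: pose keypoint heatmaps (channel count assumed)
body_mask = np.zeros((1, H, W))   # M^F: fused body part mask
m_body, m_clothes = semantic_generation_module(clothes, pose, body_mask)
```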

In Step II, the Clothes Warping Module (CWM) warps the target clothing image to $\mathcal{T}^{R}_{c}$ according to the predicted semantic layout, where a second-order difference constraint is introduced to stabilize the warping process.
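The second-order difference constraint can be illustrated on a grid of warp control points: it penalizes how far each warped point deviates from the midpoint of its neighbors, keeping adjacent displacements consistent. This is a minimal sketch of the idea, not the paper's exact loss:

```python
import numpy as np

def second_order_constraint(control_points):
    """control_points: (Gh, Gw, 2) grid of warped control point coordinates.
    Sum of squared second-order differences along rows and columns;
    large values indicate erratic, unstable local deformations."""
    p = control_points
    dxx = p[:, :-2] - 2 * p[:, 1:-1] + p[:, 2:]   # horizontal second difference
    dyy = p[:-2, :] - 2 * p[1:-1, :] + p[2:, :]   # vertical second difference
    return np.sum(dxx ** 2) + np.sum(dyy ** 2)

# An undistorted regular grid has zero penalty; bending it raises the cost.
ys, xs = np.meshgrid(np.arange(5), np.arange(5), indexing="ij")
grid = np.stack([xs, ys], axis=-1).astype(float)
second_order_constraint(grid)  # → 0.0
```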

In Steps III and IV, the Content Fusion Module (CFM) first produces the composited body part mask $\mathcal{M}^{C}_{\omega}$ using the original clothing mask $\mathcal{M}_{c}$, the synthesized clothing mask $\mathcal{M}^{S}_{c}$, the body part mask $\mathcal{M}_{\omega}$, and the synthesized body part mask $\mathcal{M}^{S}_{\omega}$, and then exploits a fusion network to generate the try-on image $\mathcal{I}^{S}$ by utilizing the information $\mathcal{T}^{R}_{c}$, $\mathcal{M}^{S}_{c}$, and the body part image $\mathcal{I}_{\omega}$ from the previous steps.
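The mask composition of Step III can be sketched with simple mask algebra. The composition rule below is an illustrative assumption, not the paper's exact formula: body-part pixels left uncovered by the synthesized clothes are preserved, and synthesized body-part pixels are generated where the original clothes used to be:

```python
import numpy as np

def composite_body_part_mask(m_c, m_s_c, m_omega, m_s_omega):
    """Compose M^C_omega from the four masks of Step III.
    The rule here is an illustrative assumption for this sketch."""
    preserved = m_omega * (1.0 - m_s_c)          # skin still visible under new clothes
    generated = m_s_omega * m_c * (1.0 - m_s_c)  # skin newly exposed by the swap
    return np.clip(preserved + generated, 0.0, 1.0)

# Toy 4x4 binary masks (1 = region present).
m_c       = np.array([[0, 1, 1, 0]] * 4, float)  # original clothing mask M_c
m_s_c     = np.array([[0, 0, 1, 1]] * 4, float)  # synthesized clothing mask M^S_c
m_omega   = np.array([[1, 0, 0, 1]] * 4, float)  # body part mask M_omega
m_s_omega = np.array([[1, 1, 0, 0]] * 4, float)  # synthesized body part mask M^S_omega
m_comp = composite_body_part_mask(m_c, m_s_c, m_omega, m_s_omega)
```

The fusion network of Step IV would then take $\mathcal{M}^{C}_{\omega}$ together with $\mathcal{T}^{R}_{c}$, $\mathcal{M}^{S}_{c}$, and $\mathcal{I}_{\omega}$ to render the final try-on image $\mathcal{I}^{S}$.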