Convolutional neural networks (CNNs) are widely used to synthesize embroidery features from images. However, existing CNN-based methods cannot predict diverse stitch types, which limits their ability to extract stitch features effectively. In this paper, we propose a multi-stitch embroidery generative adversarial network (MSEmbGAN) that uses a region-aware texture generation sub-network to predict diverse embroidery features from images. To the best of our knowledge, our work is the first CNN-based generative adversarial network to succeed in this task. The region-aware texture generation sub-network detects multiple regions in the input image with a stitch classifier and generates a stitch texture for each region based on its shape features. We also propose a structure generation network with a structural feature extractor, which helps achieve full-image structural consistency by requiring the shape and color attributes of the output to closely resemble those of the input image. To address the current lack of labeled embroidery image datasets, we provide a new multi-stitch embroidery dataset annotated with three single-stitch types and one multi-stitch type. Our dataset includes more than 30K high-quality multi-stitch embroidery images, more than 13K aligned content-embroidery image pairs, and more than 17K unaligned images, and is, as far as we know, currently the largest accessible embroidery dataset. Quantitative and qualitative experimental results, including a user study, show that our MSEmbGAN outperforms current state-of-the-art embroidery synthesis and style-transfer methods on all evaluation metrics.
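As a rough illustration of the two-branch design described above, the sketch below assumes a PyTorch setting: a stitch classifier produces soft region masks, per-stitch texture generators are blended by those masks, and a structure branch keeps the output's shape and color close to the content image. This is not the authors' implementation; the module names (StitchClassifier, TextureBranch, StructureBranch), layer sizes, and soft-mask blending scheme are illustrative assumptions.

```python
# Hypothetical sketch of a region-aware, two-branch generator (not the authors' code).
import torch
import torch.nn as nn

class StitchClassifier(nn.Module):
    """Predicts soft per-pixel masks, one per stitch type (e.g. 3 single-stitch types)."""
    def __init__(self, in_ch=3, num_stitch_types=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, num_stitch_types, 1),
        )

    def forward(self, x):
        return self.net(x).softmax(dim=1)  # (B, K, H, W) soft region masks

class TextureBranch(nn.Module):
    """One small texture generator per stitch type; outputs blended by the region masks."""
    def __init__(self, in_ch=3, num_stitch_types=3):
        super().__init__()
        self.generators = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(in_ch, 64, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(64, 3, 3, padding=1), nn.Tanh(),
            ) for _ in range(num_stitch_types)
        ])

    def forward(self, x, masks):
        textures = torch.stack([g(x) for g in self.generators], dim=1)  # (B, K, 3, H, W)
        return (masks.unsqueeze(2) * textures).sum(dim=1)               # region-wise blend

class StructureBranch(nn.Module):
    """Refines the blended texture so shape and color stay close to the content image."""
    def __init__(self, in_ch=6):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, content, texture):
        return self.net(torch.cat([content, texture], dim=1))

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.classifier = StitchClassifier()
        self.texture = TextureBranch()
        self.structure = StructureBranch()

    def forward(self, content):
        masks = self.classifier(content)          # which stitch type belongs where
        tex = self.texture(content, masks)        # per-region stitch textures
        return self.structure(content, tex)       # structurally consistent output

if __name__ == "__main__":
    out = Generator()(torch.randn(1, 3, 256, 256))
    print(out.shape)  # torch.Size([1, 3, 256, 256])
```

Soft masks keep the blend differentiable end to end, so each stitch-specific generator can specialize on its region while the whole model trains jointly; an adversarial loss and the structural-consistency constraint would be added on top of this generator.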
Introduction video
Comparison with related works
Quantitative evaluation
User study
Dataset
Data distribution
Download