Photo-To-Cartoon Translation with Generative Adversarial Network

creativework.keywords: AnimeGAN, CycleGAN, Deep Convolutional GAN, Generative Adversarial Network, Style Transfer
dc.contributor.advisor: Riasat Khan
dc.contributor.author: Istiaque Ahmed
dc.contributor.author: Kazi Md. Ifthekhar Uddin
dc.contributor.author: Rakibul Hasan
dc.contributor.id: 1812420042
dc.contributor.id: 1811019042
dc.contributor.id: 1811194042
dc.coverage.department: Electrical and Computer Engineering
dc.date.accessioned: 2024-05-19T06:43:51Z
dc.date.available: 2024-05-19T06:43:51Z
dc.date.issued: 2022
dc.description.abstract: Cartoons are a popular art form in our daily lives, and the ability to automatically create cartoon graphics from photos is highly desired. Cartoon images have a more vibrant and lively appearance than ordinary photographs. This study explains the process of translating real-world photos into cartoon-like images. Converting photos to cartoons poses several difficulties, including preserving fine hair edges and avoiding mismatched colors and texture artifacts. Photos were converted to cartoon-style images using generative adversarial networks (GANs). Several GAN architectures, namely DCGAN, CycleGAN, and AnimeGAN, have been applied in this work for cartoon conversion. Among them, CycleGAN performs best at transforming actual photographs into colorful, eye-catching cartoons (see the illustrative sketch below). The approach is based on learning-based methodologies, which have recently gained popularity for stylizing images in artistic forms such as painting. The results can be used to convert real-world photographs into high-quality cartoon graphics quickly. The project provides a web API that serves the training weights obtained from the models described in this work; based on that API, we built a web app that converts real-world images into high-quality cartoon graphics in various cartoon styles. In these experiments, the proposed approach outperforms state-of-the-art methods in producing high-quality cartoon graphics from real-world photos. Numerical results show that CycleGAN has the lowest training time per epoch and requires the fewest trainable parameters.
dc.description.degree: Undergraduate
dc.identifier.cd: 600000343
dc.identifier.print-thesis: To be assigned
dc.identifier.uri: https://repository.northsouth.edu/handle/123456789/759
dc.language.iso: en
dc.publisher: North South University
dc.title: Photo-To-Cartoon Translation with Generative Adversarial Network
dc.type: Project
oaire.citation.endPage: 28
oaire.citation.startPage: 1
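
The abstract above summarizes the CycleGAN approach at a high level. The sketch below is a minimal, illustrative PyTorch rendering of the core CycleGAN objective (two generators linked by an adversarial loss and a cycle-consistency loss); the class and variable names are hypothetical, the toy networks stand in for the project's actual models, and nothing here is taken from the thesis code itself.

# Minimal CycleGAN-style generator update (illustrative sketch only).
# Assumes PyTorch; all names (TinyGenerator, g_photo2cartoon, ...) are hypothetical.
import torch
import torch.nn as nn


def conv_block(in_ch, out_ch):
    """3x3 convolution + instance norm + ReLU, a basic CycleGAN building block."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.InstanceNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )


class TinyGenerator(nn.Module):
    """Toy image-to-image generator standing in for a full CycleGAN generator."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            conv_block(3, 32),
            conv_block(32, 32),
            nn.Conv2d(32, 3, kernel_size=3, padding=1),
            nn.Tanh(),  # outputs in [-1, 1], matching normalized image inputs
        )

    def forward(self, x):
        return self.net(x)


class TinyDiscriminator(nn.Module):
    """Toy PatchGAN-style discriminator: one real/fake score per spatial patch."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            conv_block(3, 32),
            nn.Conv2d(32, 1, kernel_size=3, padding=1),
        )

    def forward(self, x):
        return self.net(x)


# Two generators (photo->cartoon, cartoon->photo) and the cartoon-domain discriminator.
g_photo2cartoon = TinyGenerator()
g_cartoon2photo = TinyGenerator()
d_cartoon = TinyDiscriminator()

adv_loss = nn.MSELoss()   # least-squares adversarial loss, as used in CycleGAN
cycle_loss = nn.L1Loss()  # cycle-consistency term
optimizer = torch.optim.Adam(
    list(g_photo2cartoon.parameters()) + list(g_cartoon2photo.parameters()), lr=2e-4
)

# One generator update on a random placeholder "photo" batch.
photo = torch.rand(1, 3, 64, 64) * 2 - 1              # stand-in for a real photo batch
fake_cartoon = g_photo2cartoon(photo)                 # photo -> cartoon domain
reconstructed_photo = g_cartoon2photo(fake_cartoon)   # cartoon -> photo (the cycle)

patch_scores = d_cartoon(fake_cartoon)
loss = (
    adv_loss(patch_scores, torch.ones_like(patch_scores))  # fool the discriminator
    + 10.0 * cycle_loss(reconstructed_photo, photo)         # stay faithful to the input
)

optimizer.zero_grad()
loss.backward()
optimizer.step()
print(f"generator-side loss: {loss.item():.4f}")

Because the cycle-consistency term ties the two domains together without paired examples, CycleGAN can train on unpaired photo and cartoon collections, which is what makes it practical for cartoon styles where paired training data does not exist.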
Files

Original bundle (2 items):
600000343-Abstract.pdf (186.73 KB, Adobe Portable Document Format)
600000343.pdf (8.48 MB, Adobe Portable Document Format)

License bundle (1 item):
license.txt (1.93 KB, Item-specific license agreed to upon submission)