
Browsing by Author "1811194042"

Now showing 1 - 1 of 1
  • Item (Open Access)
    Photo-To-Cartoon Translation with Generative Adversarial Network
    (North South University, 2022) Istiaque Ahmed; Kazi Md. Ifthekhar Uddin; Rakibul Hasan; Riasat Khan; 1812420042; 1811019042; 1811194042
    Cartoons are a popular art form in daily life, and the ability to automatically create cartoon graphics from photos is highly desired. Cartoon images have a more vibrant and lively appearance than ordinary photographs. This study explains the process of translating real-world photos into cartoon-like images. Converting photos to cartoons presents several difficulties, including fine hair edges, mismatched colors, and texture artifacts. Photos were converted to cartoon-style images using generative adversarial networks (GANs). Several GAN architectures, namely DCGAN, CycleGAN, and AnimeGAN, have been applied in this work for cartoon conversion. Among them, CycleGAN performs better at transforming actual photographs into colorful, eye-catching cartoons. The project's approach is based on learning-based methodologies, which have lately gained popularity for stylizing images in artistic forms such as painting. The results may be used to convert real-world photographs into high-quality cartoon graphics quickly. The project provides a web API containing training weights derived from these models. Based on that API, a web app was built that converts real-world images into high-quality cartoon graphics in various cartoon styles. In these experiments, the proposed approach outperforms state-of-the-art methods in producing high-quality cartoon graphics from real-world photos. Numerical results show that the CycleGAN approach has the lowest training time per epoch and requires the fewest trainable parameters.
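
The record itself gives no implementation details. Purely as an illustration of the kind of CycleGAN-style photo-to-cartoon generator the abstract describes, below is a minimal PyTorch sketch: a ResNet-based generator that downsamples a photo, applies residual blocks, and upsamples back to the input resolution. The architecture sizes, class names (CartoonGenerator, ResnetBlock), and the dummy input are assumptions for illustration only, not the authors' code or trained weights.

# Minimal ResNet-style CycleGAN generator sketch (assumed architecture,
# not the thesis' actual model or trained weights).
import torch
import torch.nn as nn

class ResnetBlock(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.block = nn.Sequential(
            nn.ReflectionPad2d(1), nn.Conv2d(dim, dim, 3),
            nn.InstanceNorm2d(dim), nn.ReLU(inplace=True),
            nn.ReflectionPad2d(1), nn.Conv2d(dim, dim, 3),
            nn.InstanceNorm2d(dim),
        )

    def forward(self, x):
        return x + self.block(x)  # residual connection

class CartoonGenerator(nn.Module):
    """Photo -> cartoon generator: downsample, residual blocks, upsample."""
    def __init__(self, channels=3, base=64, n_blocks=6):
        super().__init__()
        layers = [nn.ReflectionPad2d(3), nn.Conv2d(channels, base, 7),
                  nn.InstanceNorm2d(base), nn.ReLU(inplace=True)]
        # two strided downsampling stages
        layers += [nn.Conv2d(base, base * 2, 3, stride=2, padding=1),
                   nn.InstanceNorm2d(base * 2), nn.ReLU(inplace=True),
                   nn.Conv2d(base * 2, base * 4, 3, stride=2, padding=1),
                   nn.InstanceNorm2d(base * 4), nn.ReLU(inplace=True)]
        # residual bottleneck
        layers += [ResnetBlock(base * 4) for _ in range(n_blocks)]
        # two upsampling stages back to the input resolution
        layers += [nn.ConvTranspose2d(base * 4, base * 2, 3, stride=2,
                                      padding=1, output_padding=1),
                   nn.InstanceNorm2d(base * 2), nn.ReLU(inplace=True),
                   nn.ConvTranspose2d(base * 2, base, 3, stride=2,
                                      padding=1, output_padding=1),
                   nn.InstanceNorm2d(base), nn.ReLU(inplace=True)]
        layers += [nn.ReflectionPad2d(3), nn.Conv2d(base, channels, 7), nn.Tanh()]
        self.model = nn.Sequential(*layers)

    def forward(self, x):
        return self.model(x)

if __name__ == "__main__":
    gen = CartoonGenerator().eval()
    photo = torch.rand(1, 3, 256, 256) * 2 - 1   # dummy photo scaled to [-1, 1]
    with torch.no_grad():
        cartoon = gen(photo)                     # cartoon-styled output, same shape
    print(cartoon.shape)                         # torch.Size([1, 3, 256, 256])

In a full CycleGAN setup, two such generators (photo-to-cartoon and cartoon-to-photo) would be trained jointly with two discriminators under adversarial and cycle-consistency losses; the sketch above shows only the inference path for one direction.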
