SHONGKHEPON: Summarization Highlighting Only Necessary Graphics from videos using K-means in Hybrid with Euclidean distance and PCA Optimizing Neural networks

Video-based content consumption has risen significantly in recent years, becoming a primary pastime and a prime source of learning and entertainment for many. Short-form, bite-sized videos have recently skyrocketed in popularity with the introduction of platforms such as TikTok and YouTube Shorts, since viewers can glance at their screens and absorb information almost instantly. Longer videos, however, struggle to hold the attention of busy viewers. Locating the significant or instructive portions of a video requires understanding its content and browsing the entire recording, which makes summarization a difficult task. Moreover, the sheer variety of Internet video subjects, from family videos to documentaries, complicates summarization because prior knowledge of the content is seldom available. Hence, in this paper, we automate this process with machine learning and deep learning-based video summarization techniques. We approach the video summarization problem by shortening an input video to highlight only its key moments, removing repetitive scenes through keyframe extraction. Deciding which frames are essential can introduce different biases; to address this, we use an unsupervised learning algorithm, K-means clustering. Clustering is a widespread technique in video summarization, and we design a succinct method that combines it with neural networks and Principal Component Analysis (PCA), improving accuracy by removing noise and redundancy while extracting keyframes.
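The pipeline described above, reducing flattened frames with PCA, clustering them with K-means, and keeping the frame nearest each centroid by Euclidean distance, can be sketched as follows. This is a minimal illustration, not the thesis implementation: the function name `extract_keyframes`, the synthetic "scene" data, and all parameter values are assumptions for demonstration, and scikit-learn's `PCA` and `KMeans` stand in for whatever components the authors used.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

def extract_keyframes(frames, n_keyframes=3, n_components=10):
    """Select one representative frame per cluster.

    frames: array of shape (n_frames, n_pixels) -- flattened grayscale frames.
    Returns the selected frame indices in temporal order.
    """
    X = np.asarray(frames, dtype=float)
    # PCA removes noise and redundancy before clustering
    n_components = min(n_components, X.shape[0], X.shape[1])
    reduced = PCA(n_components=n_components).fit_transform(X)
    km = KMeans(n_clusters=n_keyframes, n_init=10, random_state=0).fit(reduced)
    # For each cluster, keep the frame closest (Euclidean) to its centroid
    keyframes = []
    for c in range(n_keyframes):
        members = np.where(km.labels_ == c)[0]
        dists = np.linalg.norm(reduced[members] - km.cluster_centers_[c], axis=1)
        keyframes.append(int(members[np.argmin(dists)]))
    return sorted(keyframes)

# Synthetic demo: 30 "frames" drawn from 3 well-separated scenes
rng = np.random.default_rng(0)
scene_means = (0.0, 5.0, 10.0)
frames = np.vstack([
    rng.normal(loc=m, scale=0.1, size=(10, 64)) for m in scene_means
])
idx = extract_keyframes(frames, n_keyframes=3)
print(idx)  # one representative index from each scene block
```

In a real system the frame matrix would come from decoding the video (e.g. with OpenCV's `cv2.VideoCapture`) rather than from synthetic data, and the number of clusters would control how aggressively the summary compresses the original footage.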
Department of Electrical and Computer Engineering
North South University
Printed Thesis