Theses - Undergraduate
Recent Submissions
- Item (Open Access): Predicting Characteristics Associated with Breast Cancer Survival using Multiple Machine Learning Approaches (North South University, 2021-12-30). Mohammad Nazmul Haque; Mohammad Monirujjaman Khan; 1712859042.
  Abstract: Breast cancer is one of the most commonly diagnosed diseases in women globally. Numerous studies have been conducted to predict survival markers, although the majority of these analyses relied on simple statistical techniques. In contrast, this research employed machine learning approaches to develop models for identifying and visualizing relevant prognostic indicators of breast cancer survival rates. A comprehensive hospital-based breast cancer dataset was obtained from the National Cancer Institute's SEER Program (November 2017 update), which offers population-based cancer statistics. The dataset included female patients diagnosed between 2006 and 2010 with infiltrating duct and lobular carcinoma breast cancer (SEER primary sites recode NOS histology codes 8522/3). It contained nine predictor variables and one outcome variable indicating the patients' survival status (alive or dead). To identify important prognostic markers associated with breast cancer survival rates, prediction models were constructed using k-nearest neighbors, decision tree, gradient boosting, random forest, AdaBoost, logistic regression, a voting classifier, and support vector machine. All methods yielded close results in terms of model accuracy and calibration measures, with the lowest accuracy obtained from logistic regression (80.57 percent) and the highest from random forest (94.64 percent). Notably, the multiple machine learning algorithms utilized in this research achieved high accuracy, suggesting that these approaches might be used as alternative prognostic tools in breast cancer survival studies, especially in the Asian region.
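  To make the modelling setup above concrete, here is a minimal, illustrative sketch (not code from the thesis) of comparing two of the listed classifiers on a tabular survival dataset with scikit-learn; the file name `seer_breast_cancer.csv` and the `Status` column are assumptions.

```python
# Illustrative sketch only, not code from the thesis. Assumes a CSV export of the SEER
# cohort with a binary "Status" column (Alive/Dead) and the predictor columns alongside it.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

df = pd.read_csv("seer_breast_cancer.csv")            # hypothetical file name
X = pd.get_dummies(df.drop(columns=["Status"]))       # one-hot encode categorical predictors
y = (df["Status"] == "Dead").astype(int)              # 1 = dead, 0 = alive

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

models = {
    "random forest": RandomForestClassifier(n_estimators=200, random_state=42),
    "logistic regression": LogisticRegression(max_iter=1000),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    print(f"{name}: accuracy = {accuracy_score(y_test, model.predict(X_test)):.4f}")
```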
- Item (Open Access): Direct Industrial Experiment and Theoretical Studies for Fuel Level Monitoring of Vehicles (North South University, 2019-10-30). Tofawel Ahmad Shahin; Md. Towhidur Rahman; Md. Shamim Reza; Md. Farhad Ibna Alam Sajib; Mahdy Rahman Chowdhury; 1521336045; 1512526043; 1511050043; 1511420043.
  Abstract: Nowadays, accurate records of fuel filled and fuel consumed by construction vehicles are rarely maintained, which results in enormous financial losses. Advances in technology and the availability of economical open-source hardware are setting a new trend in system design, and technologies like the Internet of Things (IoT) can ease data collection and analysis. The main objective of this project is to describe a system that can monitor or track construction vehicles from a centralized place. The system design is generalized for monitoring different parameters such as location, engine run hours, and fuel consumption. The proposed system uses an open-source sensor and controller at the engine that support a GSM/GPRS module for data transfer from remote locations, along with a GPS tracker to report the current location of the vehicles. A large amount of fuel theft occurs from such vehicles, so owners who want to monitor a vehicle's condition from home, such as its fuel level or speed, can use this device to address that problem. The device uses an Arduino Mega connected to an accelerometer sensor (MPU-6050) and an SD card module. An analog pin (A0) and a digital pin (30) on the Arduino Mega are connected to the fuel line and the engine line of the vehicle, respectively. The ADC values and the sensor readings obtained from the Arduino are stored on the SD card, and the fuel level can be determined by analysing this data. The device is extremely cost-effective, so vehicle owners and companies can buy it at a much lower price than other fuel-level monitoring devices; it also improves accuracy, and the sensor used in the device can serve many other purposes.
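  Since the abstract describes logging raw ADC values from the fuel line to an SD card and analysing them afterwards, the sketch below illustrates what that offline analysis step could look like; it is not code from the thesis, and the log file name, voltage range, and tank calibration are all assumptions.

```python
# Illustrative sketch only, not code from the thesis. Assumes a CSV log copied from the
# SD card with "timestamp" and "adc" columns (10-bit readings from analog pin A0), and a
# hypothetical linear calibration between the fuel sender's output voltage and litres.
import csv

ADC_MAX = 1023               # 10-bit ADC on the Arduino Mega
V_REF = 5.0                  # reference voltage (assumed)
V_EMPTY, V_FULL = 0.5, 4.5   # hypothetical sender output at empty/full tank
TANK_LITRES = 60.0           # hypothetical tank capacity

def adc_to_litres(adc_value: int) -> float:
    """Convert a raw ADC reading into an approximate fuel volume in litres."""
    volts = adc_value / ADC_MAX * V_REF
    fraction = (volts - V_EMPTY) / (V_FULL - V_EMPTY)
    return max(0.0, min(1.0, fraction)) * TANK_LITRES

with open("fuel_log.csv", newline="") as f:           # hypothetical log file name
    for row in csv.DictReader(f):
        print(f"{row['timestamp']}: {adc_to_litres(int(row['adc'])):.1f} L")
```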
- Item (Open Access): NSU Canteen Automation (North South University, 2019-04-30). Mohammad Adib Khan; Rifat Arefin Badhon; Sarwat Islam Dipanzan; Mirza Mohammad Lutfe Elahi; 1430420042; 1511738042; 1510117042.
  Abstract: North South University's (NSU) canteen has seen a major revamp in both its appearance and the way the cafeteria operates. Over the years it has been the go-to place for students and teachers alike for a quick meal or refreshments between hectic schedules. The recent changes, however, have affected the average waiting times substantially. The new canteen has adopted technologies such as electronic displays and Point of Sale (POS) systems for completing payments, which are more advanced and offer credibility, but this creates an additional bottleneck in the total time spent on the actual service of "getting the food". The current mode of operation relies on standing in long queues for payment and then taking the payment slip (receipt) to another terminal to receive the food. Since the first queue builds up during peak hours, it creates a backlog of customers waiting in frustration, which is not only inefficient but time-consuming as well. This has affected a large number of the cafeteria's daily customers, as time is scarce in the daily lives of academics, students and teachers alike. The solution we have devised offloads the manual payment process to an online payment system, with a payment processor accessed from a mobile device through a web application and an Android app. This enables users to order and pay inside the canteen without having to resort to long queues. Users can transfer funds from their bank accounts/cards to the application (canteen account), which can then be used to purchase food inside the cafeteria. This avoids cash exchange at the counters, saving time and resources and reducing the need for additional queues.
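  The core of the workflow above is a prepaid canteen account that is topped up online and debited when an order is placed. The following sketch (not code from the project, all names hypothetical) models that flow in a few lines.

```python
# Illustrative sketch only, not code from the project. A minimal in-memory model of the
# prepaid canteen-account flow: top up a balance, place an order, and receive a receipt
# token to show at the food counter, so no cash changes hands.
from dataclasses import dataclass, field

@dataclass
class CanteenAccount:
    user_id: str
    balance: float = 0.0
    orders: list = field(default_factory=list)

    def top_up(self, amount: float) -> None:
        """Credit funds transferred from the user's bank account/card."""
        self.balance += amount

    def place_order(self, item: str, price: float) -> str:
        """Debit the account and return a receipt token for collecting the food."""
        if price > self.balance:
            raise ValueError("Insufficient balance")
        self.balance -= price
        receipt = f"order-{len(self.orders) + 1}-{self.user_id}"
        self.orders.append((receipt, item, price))
        return receipt

account = CanteenAccount(user_id="student-001")
account.top_up(200.0)
print(account.place_order("lunch set", 80.0), "| remaining balance:", account.balance)
```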
- Item (Open Access): DISTRACTED DRIVER DETECTION USING MACHINE LEARNING AND DEEP LEARNING (North South University, 2020-04-30). Ali Shahan; Enamul Haque; SHAHNEWAZ SIDDIQUE; 1520317042; 1531422642.
  Abstract: One of the main causes of car accidents is distracted driving, the act of driving while engaging in other activities such as texting or talking on the phone. Activities of that nature keep the driver from paying attention to the road, and these distractions in turn compromise the safety of the driver, passengers, bystanders and people in other vehicles. A report by the Bangladesh Passengers Welfare Association said that at least 7,796 people were killed and 15,980 were injured in 6,048 accidents in 2018. The United States Department of Transportation states that one in five car accidents is caused by distracted drivers. This work examines images of drivers performing different actions, some of which can be deemed distracting while behind the wheel of a car. A mixture of neural networks is used to more accurately predict which activity is distracting the driver.
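  As an illustration of the kind of image classifier the abstract refers to, the sketch below trains a small convolutional network on driver images; it is not the thesis' model, and the class count, image size, and directory layout are assumptions.

```python
# Illustrative sketch only, not the model from the thesis. Assumes driver images sorted
# into one sub-directory per activity class under driver_images/train.
import tensorflow as tf

NUM_CLASSES = 10          # assumed number of driver-activity classes
IMG_SIZE = (224, 224)

train_ds = tf.keras.utils.image_dataset_from_directory(
    "driver_images/train", image_size=IMG_SIZE, batch_size=32)   # hypothetical path

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255, input_shape=IMG_SIZE + (3,)),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, epochs=5)
```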
- Item (Open Access): Face Verification with Liveness Detection using Deep Learning (North South University, 2022-12-31). Khaled Saifullah; Pervej Ahamed Joy; Shahin Arman Apu; Atiqur Rahman; 1632664642; 171307642; 1711280642.
  Abstract: Numerous advancements in the area of face detection and liveness analysis have been made to improve device security and attendance verification systems. Several methods use a 3D facial model to estimate the authenticity of the individual in front of the camera. Our solution attempts to address this difficulty without using complex 3D imaging techniques or technology, which makes the system more cost-effective and convenient. It is divided into two sections: the first performs face recognition and the second checks the liveness of the face. In the first stage we employed a model based on Google's FaceNet, which learns a mapping from face images to a compact Euclidean space in which distances correspond directly to a measure of face similarity. Once this space has been created, face recognition can be accomplished with standard approaches using the embeddings as feature vectors. For the second stage we built a cascaded multi-task architecture that extracts specific facial features from the face image and then uses their relative displacements to verify liveness. These extracted features are used to test the liveness of a person's face by having them perform a series of actions in random order, such as body and facial twitches. The FaceNet-based face detection model has an accuracy of 90%, and the facial feature extraction model has an accuracy of 97%. After merging both models in real time, we achieve an overall accuracy of 90%.
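  The verification step described above (FaceNet-style embeddings compared in Euclidean space) can be illustrated with the short sketch below; it is not code from the thesis, and the embedding source and distance threshold are assumptions.

```python
# Illustrative sketch only, not code from the thesis. Two faces are treated as the same
# person when the Euclidean distance between their (L2-normalised) embedding vectors
# falls below a threshold; how the embeddings are produced is out of scope here.
import numpy as np

MATCH_THRESHOLD = 1.0   # hypothetical threshold, tuned on a validation set in practice

def is_same_person(emb_a: np.ndarray, emb_b: np.ndarray,
                   threshold: float = MATCH_THRESHOLD) -> bool:
    """Return True when two face embeddings are close enough to count as a match."""
    emb_a = emb_a / np.linalg.norm(emb_a)
    emb_b = emb_b / np.linalg.norm(emb_b)
    return float(np.linalg.norm(emb_a - emb_b)) < threshold

# Random 128-d vectors standing in for real FaceNet embeddings of two face crops.
rng = np.random.default_rng(0)
enrolled, probe = rng.normal(size=128), rng.normal(size=128)
print("match:", is_same_person(enrolled, probe))
```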