Determining Physics Law From Moving Object
| dc.contributor.advisor | Mohammad Ashrafuzzaman Khan | |
| dc.contributor.author | Sharara Sartaj | |
| dc.contributor.author | Sanzar Rahman | |
| dc.contributor.id | 1620169042 | |
| dc.contributor.id | 1621555042 | |
| dc.coverage.department | Electrical and Computer Engineering | |
| dc.date.accessioned | 2025-08-28T06:34:26Z | |
| dc.date.available | 2025-08-28T06:34:26Z | |
| dc.date.issued | 2020 | |
| dc.description.abstract | Computer vision is an interdisciplinary scientific field within deep learning that deals with how computers can achieve high-level understanding from images or videos. It attempts to understand and automate, from an engineering perspective, tasks that the human visual system can do. Detection lies at the core of computer vision, and in this project we try to determine which law of physics a moving object obeys using deep learning algorithms and techniques. As an initial step, we trained our deep learning model to detect a sports ball in both images and video. The accuracy we obtained was quite good, and we are now working to track the ball across continuous time frames without losing it for even a second. Our next steps are to extract the ball's coordinate values from a video and compare them with the curves that laws of physics follow, which is the main goal of this project. | |
| dc.description.degree | Undergraduate | |
| dc.identifier.cd | 600000641 | |
| dc.identifier.print-thesis | To be assigned | |
| dc.identifier.uri | https://repository.northsouth.edu/handle/123456789/1415 | |
| dc.language.iso | en | |
| dc.publisher | North South University | |
| dc.rights | © NSU Library | |
| dc.title | Determining Physics Law From Moving Object | |
| dc.type | Thesis | |
| oaire.citation.endPage | 52 | |
| oaire.citation.startPage | 1 |
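The abstract's final step, comparing tracked ball coordinates with curves predicted by physics, can be sketched as a least-squares fit. The snippet below is a minimal illustration, not the thesis's actual method: it assumes the tracker yields (x, y) centre coordinates per frame and fits them to the parabola y = ax² + bx + c predicted by projectile motion; the function name and synthetic data are hypothetical.

```python
import numpy as np

def fit_projectile(xs, ys):
    """Fit tracked ball coordinates to a quadratic (projectile) curve.

    Returns the coefficients (a, b, c) of y = a*x^2 + b*x + c and the
    RMS residual; a small residual suggests the motion follows
    projectile dynamics.
    """
    coeffs = np.polyfit(xs, ys, deg=2)
    rms = np.sqrt(np.mean((np.polyval(coeffs, xs) - ys) ** 2))
    return coeffs, rms

# Synthetic example standing in for tracker output:
# a ball launched from the origin, y = -4.9*x^2 + 10*x (g ≈ 9.8 m/s^2).
xs = np.linspace(0.0, 2.0, 30)
ys = -4.9 * xs**2 + 10.0 * xs
coeffs, rms = fit_projectile(xs, ys)
```

A near-zero residual and a leading coefficient close to -g/2 (after scaling pixel coordinates to metres) would indicate the trajectory matches free-fall under gravity.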