Implementation of the YOLO Algorithm in an Android Bladed-Weapon Detection Application

Christopher Nathanael Liunanda, Silvia Rostianingsih, Anita Nathania Purbowo

Abstract


Advances in smartphone computing power now exceed the needs of traditional applications, which makes computationally heavy tasks such as object detection feasible on mobile devices. TensorFlow Lite can be used to run models quickly and easily on mobile devices. The YOLO network was chosen because it is faster and more accurate than other similar networks. The objects to be identified are bladed weapons, namely knives and machetes. Bladed weapons were selected because of their potential applications in the real world.

The trained models are YOLOv2-tiny, YOLOv3-tiny, and YOLOv3. Transfer learning is applied to these models with Darknet so that YOLO can detect the desired weapons. The Darknet models are then converted to TensorFlow Lite. Model testing is done by evaluating standard accuracy metrics such as precision, recall, mAP, and average IoU. The model with the best performance is installed in the Android application to detect knives and machetes.
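One of the evaluation metrics above, IoU (Intersection over Union), measures how well a predicted bounding box overlaps a ground-truth box. A minimal sketch, assuming boxes are given as (xmin, ymin, xmax, ymax) tuples (this box format is an illustrative assumption, not taken from the paper):

```python
def iou(box_a, box_b):
    """Intersection over Union of two boxes given as (xmin, ymin, xmax, ymax)."""
    # Coordinates of the intersection rectangle.
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    # Clamp to zero when the boxes do not overlap.
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

A perfect detection gives an IoU of 1.0, and detections are typically counted as true positives only above a fixed IoU threshold (commonly 0.5), which is how the precision, recall, and mAP figures are derived.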

The test results show that model performance depends strongly on the type of network, the number of images in the dataset, and the shape of the dataset. YOLOv2-tiny produces the worst results, with an mAP of 55% and an average IoU of 35%. The final accuracies of the TensorFlow Lite models on Android are 72.7% for YOLOv3 and 63.6% for YOLOv3-tiny. The YOLOv3-tiny network is suitable for real-time detection because of its fast inference time (0.9 seconds).
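Real-time YOLO detection also requires a post-processing step, non-maximum suppression (NMS), to discard overlapping duplicate boxes before results are shown on screen. A minimal sketch of greedy NMS, assuming boxes as (xmin, ymin, xmax, ymax) tuples with matching confidence scores (the function name and threshold default are illustrative assumptions, not from the paper):

```python
def nms(boxes, scores, iou_threshold=0.5):
    """Greedy non-maximum suppression.

    boxes: list of (xmin, ymin, xmax, ymax); scores: matching confidences.
    Returns indices of the kept boxes, highest score first.
    """
    def iou(a, b):
        # Overlap width and height, clamped to zero for disjoint boxes.
        ix = max(0, min(a[2], b[2]) - max(a[0], b[0]))
        iy = max(0, min(a[3], b[3]) - max(a[1], b[1]))
        inter = ix * iy
        union = ((a[2] - a[0]) * (a[3] - a[1])
                 + (b[2] - b[0]) * (b[3] - b[1]) - inter)
        return inter / union if union > 0 else 0.0

    # Process candidates in order of descending confidence.
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        # Suppress remaining boxes that overlap the kept one too much.
        order = [i for i in order if iou(boxes[best], boxes[i]) < iou_threshold]
    return keep
```

For example, two near-identical knife detections would be collapsed to the single higher-confidence box, while a distant machete detection survives.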


Keywords


Object Detection; Darkflow; YOLO; TensorFlow; TensorFlow Lite


