
The Case for Depthwise Separable Convolutions & Variational Dropout in YOLOv3

Deep learning algorithms have demonstrated remarkable performance across many sectors and have become one of the main foundations of modern computer-vision solutions. However, these algorithms often impose prohibitive memory and computational overhead, especially in resource-constrained environments. In this study, we combine the state-of-the-art object-detection model YOLOv3 with depthwise separable convolutions in an attempt to bridge the gap between the superior accuracy of convolutional neural networks and the limited availability of computational resources. We propose three lightweight variants of YOLOv3 by replacing the original network's standard convolutions with depthwise separable convolutions at different strategic locations, and we evaluate their impact on YOLOv3's size, speed, and accuracy. We also explore variational dropout, a technique that learns an individual, unbounded dropout rate for each network weight. Experiments on the PASCAL VOC benchmark dataset show promising results: variational dropout combined with the most efficient YOLOv3 variant yields an extremely sparse solution that removes 95% of the baseline network's parameters at little cost to accuracy.
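To illustrate the first idea (the report's own code is not reproduced here), the sketch below shows a depthwise separable block of the kind used to replace YOLOv3's standard convolutions: a per-channel depthwise convolution followed by a 1x1 pointwise convolution that mixes channels. The PyTorch framing, module name, and hyperparameters are illustrative assumptions, not taken from the report.

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Depthwise separable convolution: a k x k depthwise convolution
    (one filter per input channel, via groups=in_ch) followed by a 1x1
    pointwise convolution. The parameter count drops from k*k*Cin*Cout
    for a standard convolution to k*k*Cin + Cin*Cout."""
    def __init__(self, in_ch, out_ch, kernel_size=3, stride=1):
        super().__init__()
        self.depthwise = nn.Conv2d(
            in_ch, in_ch, kernel_size, stride=stride,
            padding=kernel_size // 2, groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1, bias=False)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

# For a 3x3 layer mapping 256 -> 512 channels, a standard convolution uses
# 3*3*256*512 = 1,179,648 weights; the separable version uses
# 3*3*256 + 256*512 = 133,376 -- roughly an 8.8x reduction.
x = torch.randn(1, 256, 13, 13)
print(DepthwiseSeparableConv(256, 512)(x).shape)  # torch.Size([1, 512, 13, 13])
```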
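The per-weight, unbounded dropout rates described above match the sparsifying form of variational dropout (Molchanov et al., 2017), in which each weight carries its own rate alpha = sigma^2 / theta^2 and weights whose rate grows effectively unbounded can be pruned. Below is a minimal sketch of such a layer for the linear case; the class name, initialization, and the pruning threshold of log alpha = 3 are assumptions for illustration, not details from the report.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VariationalDropoutLinear(nn.Module):
    """Linear layer with a learned, unbounded dropout rate per weight,
    following the sparse variational dropout of Molchanov et al. (2017)."""
    def __init__(self, in_features, out_features, threshold=3.0):
        super().__init__()
        self.theta = nn.Parameter(torch.randn(out_features, in_features) * 0.01)
        # log sigma^2 of the multiplicative Gaussian noise on each weight
        self.log_sigma2 = nn.Parameter(
            torch.full((out_features, in_features), -10.0))
        self.threshold = threshold  # prune weights with log alpha above this

    @property
    def log_alpha(self):
        # alpha = sigma^2 / theta^2 parameterizes the per-weight dropout rate
        return self.log_sigma2 - 2.0 * torch.log(torch.abs(self.theta) + 1e-8)

    def forward(self, x):
        if self.training:
            # Local reparameterization: sample the pre-activations directly
            mu = F.linear(x, self.theta)
            var = F.linear(x * x, torch.exp(self.log_sigma2))
            return mu + torch.sqrt(var + 1e-8) * torch.randn_like(mu)
        # At test time, weights with huge dropout rates are zeroed out
        mask = (self.log_alpha < self.threshold).float()
        return F.linear(x, self.theta * mask)

    def kl(self):
        # Approximate KL term from Molchanov et al. (2017), added to the loss
        k1, k2, k3 = 0.63576, 1.8732, 1.48695
        la = self.log_alpha
        return -(k1 * torch.sigmoid(k2 + k3 * la)
                 - 0.5 * F.softplus(-la) - k1).sum()
```

In training, the kl() terms of all such layers are typically summed and added to the task loss (often with a warm-up coefficient); as the KL penalty drives log alpha upward for uninformative weights, the test-time mask removes them, which is the mechanism behind the sparsity reported in the abstract.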


