dc.description.abstract | Image deraining is essential for autonomous driving systems, such as ADAS, which require clear images to assess driving situations. Recently, many methods have been proposed for image deraining; however, none of them focuses on inference speed. In addition, most of them train their models on synthetic rainy images, which leads to poor generalization to real rainy images. To
address inference speed, we propose a Lightweight Channel Attention Block and introduce fire modules into our model to reduce its number of parameters. In addition, several kernel filters are removed from the subnetworks for the same purpose. As a result, although our model produces derained images of slightly lower quality, it runs 2.74 times faster than the baseline model when deployed on an NVIDIA AGX Xavier Developer Kit. To address the generalization issue,
we mix real and synthetic rainy images in the training set. We expect that our model can perform deraining in real time (i.e., exceeding 10 FPS) when deployed on NVIDIA's next-generation embedded device, the Jetson AGX Orin Developer Kit. | en_US |