Model-guided Multi-path Knowledge Aggregation

for Aerial Saliency Prediction

Kui Fu1
Jia Li1
Yu Zhang3
Hongze Shen1
Yonghong Tian2

1State Key Laboratory of Virtual Reality Technology and Systems, SCSE, Beihang University, Beijing, China

2Peng Cheng Laboratory, Shenzhen, China

3SenseTime Research, Beijing, China

TIP 2020

System framework of the baseline model MM-Net.

Abstract

As an emerging vision platform, a drone can observe scenes from many unusual viewpoints, which brings new challenges to the classic vision task of video saliency prediction. To investigate these challenges, this paper proposes a large-scale video dataset for aerial saliency prediction, which consists of ground-truth salient object regions of 1,000 aerial videos annotated by 24 subjects. To the best of our knowledge, it is the first large-scale video dataset that focuses on visual saliency prediction on drones. Based on this dataset, we propose a Model-guided Multi-path Network (MM-Net) that serves as a baseline model for aerial video saliency prediction. Inspired by the annotation process in eye-tracking experiments, MM-Net adopts multiple information paths, each of which is initialized under the guidance of a classic saliency model. After that, the visual saliency knowledge encoded in the most representative paths is selected and aggregated to improve the capability of MM-Net in predicting spatial saliency in aerial scenarios. Finally, these spatial predictions are adaptively combined with the temporal saliency predictions via a spatiotemporal optimization algorithm. Experimental results show that MM-Net outperforms ten state-of-the-art models in predicting aerial video saliency.
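The two-stage idea in the abstract, aggregating model-guided spatial paths and then blending the result with a temporal prediction, can be illustrated with a minimal NumPy sketch. This is a hypothetical simplification, not MM-Net's actual architecture: the function names, the fixed fusion weights, and the scalar blending coefficient `alpha` are all assumptions standing in for the learned selection/aggregation and the spatiotemporal optimization described in the paper.

```python
import numpy as np

def aggregate_paths(path_maps, weights):
    """Fuse per-path spatial saliency maps with normalized weights.
    A hypothetical stand-in for MM-Net's knowledge aggregation step."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()  # make the path weights a convex combination
    fused = sum(wi * m for wi, m in zip(w, path_maps))
    return fused / (fused.max() + 1e-8)  # rescale to [0, 1]

def combine_spatiotemporal(spatial, temporal, alpha=0.6):
    """Convex blend of spatial and temporal saliency predictions.
    alpha is an illustrative fixed weight; the paper instead solves a
    spatiotemporal optimization to combine the two adaptively."""
    return alpha * spatial + (1.0 - alpha) * temporal

# Toy example: three model-guided paths predicting on a 4x4 frame.
rng = np.random.default_rng(0)
maps = [rng.random((4, 4)) for _ in range(3)]
spatial = aggregate_paths(maps, weights=[0.5, 0.3, 0.2])
temporal = rng.random((4, 4))
final = combine_spatiotemporal(spatial, temporal)
```

With all inputs in [0, 1], both the weighted fusion and the convex blend keep the final map in [0, 1], mirroring how a valid saliency map stays a normalized density after combination.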

Qualitative comparisons

Representative frames of state-of-the-art models on AVS1K. (a) Video frame, (b) Ground truth, (c) HFT, (d) SP, (e) PNSP, (f) SSD, (g) LDS, (h) eDN, (i) iSEEL, (j) SalNet, (k) DVA, (l) STS, (m) MM-Net, (n) MM-Net-, (o) MM-Net+.

BibTex Citation

@article{fu2020model,
  title={Model-guided Multi-path Knowledge Aggregation for Aerial Saliency Prediction},
  author={Fu, Kui and Li, Jia and Zhang, Yu and Shen, Hongze and Tian, Yonghong},
  journal={IEEE Transactions on Image Processing},
  year={2020},
  publisher={IEEE}
}