Recognising occluded multi-view actions using local nearest neighbour embedding

Yang Long, Fan Zhu, Ling Shao

Research output: Contribution to journal › Article › peer-review

7 Citations (Scopus)


The recent advancement of multi-sensor technologies and algorithms has driven significant progress in human action recognition systems, especially for dealing with realistic scenarios. However, partial occlusion, a major obstacle in real-world applications, has not received sufficient attention in the action recognition community. In this paper, we extensively investigate how occlusion can be addressed by multi-view fusion. Specifically, we propose a robust representation called local nearest neighbour embedding (LNNE). We then extend the LNNE method to three multi-view fusion scenarios. Additionally, we provide a detailed analysis of the proposed voting strategy from a boosting point of view. We evaluate our approach on both synthetic and realistic occluded databases, and the LNNE method outperforms state-of-the-art approaches in all tested scenarios.
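The abstract's exact formulation of LNNE and the voting-based fusion is not reproduced here, but the general idea of a nearest-neighbour-embedding classifier with per-view voting can be sketched as follows. This is a minimal illustration under assumptions: inverse-distance weights stand in for the paper's reconstruction weights, and all function names (`lnne_classify`, `multi_view_vote`) and parameters (`k`) are hypothetical, not taken from the paper.

```python
import numpy as np

def lnne_classify(x, train_feats, train_labels, k=2):
    """Hypothetical sketch: embed a test descriptor x in the local
    neighbourhood of its k nearest training descriptors and accumulate
    the neighbour weights as class votes.

    Inverse-distance weights are used here as a simple stand-in for
    the reconstruction weights a nearest-neighbour embedding would solve for.
    """
    dists = np.linalg.norm(train_feats - x, axis=1)
    nn = np.argsort(dists)[:k]
    w = 1.0 / (dists[nn] + 1e-8)   # avoid division by zero
    w /= w.sum()
    votes = {}
    for idx, wi in zip(nn, w):
        votes[train_labels[idx]] = votes.get(train_labels[idx], 0.0) + wi
    return max(votes, key=votes.get)

def multi_view_vote(view_feats, view_banks, view_labels, k=2):
    """Fuse per-view decisions by majority voting across camera views,
    so that a view degraded by occlusion can be outvoted by the others."""
    preds = [lnne_classify(x, feats, labels, k)
             for x, feats, labels in zip(view_feats, view_banks, view_labels)]
    return max(set(preds), key=preds.count)
```

A toy usage: with two well-separated action classes per view, the fused prediction follows the majority of the per-view nearest-neighbour decisions, which is the intuition behind voting-based multi-view fusion.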
Original language: English
Pages (from-to): 36-45
Journal: Computer Vision and Image Understanding
Publication status: Published - Mar 2016


