Spatio-Temporal Laplacian Pyramid Coding for Action Recognition

Ling Shao, Xiantong Zhen, Dacheng Tao, Xuelong Li

Research output: Contribution to journal › Article › peer-review

189 Citations (Scopus)


We present a novel descriptor, called spatio-temporal Laplacian pyramid coding (STLPC), for holistic representation of human actions. In contrast to sparse representations based on detected local interest points, STLPC regards a video sequence as a whole, with spatio-temporal features extracted directly from it, which prevents the loss of information inherent in sparse representations. By decomposing each sequence into a set of band-pass-filtered components, the proposed pyramid model localizes features residing at different scales and is therefore able to effectively encode the motion information of actions. To make features further invariant and resistant to distortions as well as noise, a bank of 3-D Gabor filters is applied to each level of the Laplacian pyramid, followed by max pooling within filter bands and over spatio-temporal neighborhoods. Since the convolving and pooling are performed spatio-temporally, the coding model can capture structural and motion information simultaneously and provide an informative representation of actions. The proposed method achieves superb recognition rates on the KTH, the multiview IXMAS, the challenging UCF Sports, and the newly released HMDB51 datasets. It outperforms state-of-the-art methods, demonstrating its great potential for action recognition.
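The core pipeline described in the abstract — decomposing a video volume into band-pass components at multiple scales, then max pooling over spatio-temporal neighborhoods — can be illustrated with a minimal NumPy/SciPy sketch. This is not the authors' implementation (it omits the 3-D Gabor filter bank and the paper's exact pooling scheme); the function names, pyramid depth, and Gaussian smoothing parameter are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter

def st_laplacian_pyramid(video, levels=3, sigma=1.0):
    """Decompose a (T, H, W) video volume into band-pass components.

    Each level stores the difference between the volume and its
    Gaussian-smoothed version (a spatio-temporal band-pass filter);
    the smoothed volume is then downsampled by 2 in all dimensions
    for the next octave. Sketch only -- not the paper's exact filters.
    """
    pyramid = []
    current = video.astype(np.float64)
    for _ in range(levels):
        low = gaussian_filter(current, sigma)
        pyramid.append(current - low)   # band-pass component at this scale
        current = low[::2, ::2, ::2]    # downsample for the next octave
    pyramid.append(current)             # residual low-pass component
    return pyramid

def st_max_pool(volume, size=2):
    """Max pooling over non-overlapping spatio-temporal neighborhoods."""
    pooled = maximum_filter(volume, size=size)
    step = size
    return pooled[size // 2::step, size // 2::step, size // 2::step]

# Toy usage: a 16-frame, 32x32 synthetic clip.
video = np.random.rand(16, 32, 32)
pyr = st_laplacian_pyramid(video, levels=3)
features = [st_max_pool(level) for level in pyr]
```

In the full STLPC pipeline, each band-pass level would additionally be convolved with a bank of 3-D Gabor filters before pooling, and the pooled responses concatenated into the final holistic descriptor.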
Original language: English
Pages (from-to): 817-827
Journal: IEEE Transactions on Cybernetics
Issue number: 6
Publication status: Published - Jun 2014


