IEEE/CAA Journal of Automatica Sinica, 2015, Vol. 2, Issue 2: 226-232
View-invariant Gait Authentication Based on Silhouette Contours Analysis and View Estimation
Songmin Jia , Lijia Wang, Xiuzhi Li    
1. College of Electronic Information & Control Engineering, Beijing University of Technology, Beijing 100124, China;
2. Department of Information Engineering and Automation, Hebei College of Industry and Technology, Shijiazhuang 050091, China;
3. College of Electronic Information & Control Engineering, Beijing University of Technology, Beijing 100124, China
Abstract: In this paper, we propose a novel view-invariant gait authentication method based on silhouette contours analysis and view estimation. The approach extracts the Lucas-Kanade based gait flow image and the head and shoulder mean shape (LKGFI-HSMS) of a human by using Lucas-Kanade optical flow and Procrustes shape analysis (PSA). LKGFI-HSMS preserves both the dynamic and static features of a gait sequence. The view between a person and the camera is identified and used to select the target's gait features, which overcomes view variations. The similarity scores of LKGFI and HSMS are calculated, and the product rule combines the two scores to further improve the discrimination power of the extracted features. Experimental results demonstrate that the proposed approach is robust to view variations and achieves a high authentication rate.
Key words: silhouette contours analysis, view estimation, Lucas-Kanade based gait flow image, head and shoulder mean shape, gait recognition
I. INTRODUCTION

Gait analysis is a newly emerged biometric that uses the manner of walking to recognize an individual[1]. During the past decade, gait analysis has been extensively investigated in the computer vision community. Approaches to gait analysis fall mainly into two categories: model-based methods and appearance-based methods[2]. Model-based methods[3, 4] study the movement of various body parts and obtain measurable parameters, and are less affected by the viewpoint[5]. However, their complex searching and mapping processes significantly increase the size of the feature space and the computational cost[6].

Appearance-based gait analysis methods do not need prior modeling but operate on silhouette sequences to capture gait characteristics[6]. Statistical methods[7, 8, 9] are usually applied to describe the silhouettes because of their low computational cost. Among them, the gait energy image (GEI)[10] is a commonly used method. It reflects the major shape of the gait silhouettes and their changes over a gait cycle, but loses some intrinsic dynamic characteristics of the gait pattern. Zhang et al.[2] proposed the active energy image (AEI), constructed by calculating the active regions of two adjacent silhouette images, for gait representation. AEI is projected to a low-dimensional feature subspace via two-dimensional locality preserving projections (2DLPP) to improve the discrimination power. Afterwards, Roy et al.[11] took advantage of the key pose states of a gait cycle to extract a novel feature called the pose energy image (PEI). PEI, however, suffers a high computational burden; to overcome this shortcoming, pose kinematics is applied to select a set of most probable classes before PEI is used. Pratik et al.[12] presented a pose depth volume (PDV) method using a partial volume reconstruction of the frontal surface of each silhouette. Zeng et al.[13] proposed a radial basis function (RBF) network based method to extract gait dynamics features: a constant RBF network is obtained from training samples, and a test gait is then recognized according to the smallest-error principle. Recently, Lam et al.[14] constructed a gait flow image (GFI) by calculating an optical flow field between consecutive silhouettes in a gait period to extract motion features. However, the Horn-Schunck approach used to obtain the optical flow field is computationally expensive. To alleviate this cost, a Lucas-Kanade optical flow based gait flow image (LKGFI)[15] that takes view into account was proposed to identify humans. The Lucas-Kanade approach, which has a low computational cost, is adopted to calculate the optical flow field. However, LKGFI captures only the motion characteristic of a gait and ignores the structural characteristic, which is also important in representing a gait.

Procrustes shape analysis (PSA) is a popular method in directional statistics for obtaining a shape vector that is invariant to scale, translation and rotation[6]. Wang et al.[7] used the mean shape of silhouette contours analyzed with PSA to capture the structural characteristic of a gait, especially the shape cues of body biometrics. Choudhury et al.[6] presented a recognition method combining spatio-temporal motion characteristics and statistical and physical parameters (STM-SPP) by analyzing the shape of silhouette contours using PSA and elliptic Fourier descriptors (EFDs). This feature is usually robust to carrying conditions. Inspired by STM-SPP, we extract the head and shoulder mean shape (HSMS), which changes little while a person is walking, as the static feature of a gait using PSA.

Most existing recognition methods achieve good performance in the single-view case[16, 17]. They can only be applied under the assumption of an identical view, which significantly limits their potential real-world applications. To overcome this drawback, Bodor et al.[18] proposed using image-based rendering to generate optimal input sequences for view-independent motion recognition, where the view is constructed from a combination of non-orthogonal views taken from several cameras. Liu et al.[19] conducted joint subspace learning (JSL) to obtain prototypes of different views; a gait is then represented by the coefficients of a linear combination of these prototypes in the corresponding views. Kusakunniran et al.[20] developed a view transformation model (VTM) that uses the singular value decomposition (SVD) technique to project gait features from one viewing angle to another. However, an optimized VTM is hard to obtain, and the method depends on the amount of training data, which leads to huge memory consumption. Considering the gait sequences collected from different views of the same person as a feature set, Liu et al.[21] devised a subspace representation (MSR) method to measure the variances among samples, and then proposed a marginal canonical correlation analysis (MCCA) method to exploit the discriminative information in the subspaces for recognition.

In this study, we propose a novel method based on silhouette contours analysis and view estimation for gait recognition. The basic idea of silhouette contours analysis is to extract LKGFI-HSMS from a gait sequence to represent its dynamic and static characteristics. LKGFI-HSMS is constructed by calculating an optical flow field with the Lucas-Kanade approach and analyzing the head and shoulder shape with PSA. In our previous study[22], a view-invariant gait recognition method based on LKGFI and view was proposed. To further improve the recognition rate, HSMS is introduced here to represent the static information: it characterizes a person's head and shoulder contours by analyzing the configuration matrix with PSA. Moreover, to overcome the view problem in gait recognition, the view is identified and used as a criterion to select the target's gait features from a feature database. Finally, our method is evaluated on gait databases, and the results show improvements in both computational cost and recognition rate. The contributions of this paper are as follows:

1) As a static feature of human motion, HSMS is proposed for the first time. The HSMS is the head and shoulder mean shape extracted from a gait sequence using the Procrustes shape analysis method. Since the head and shoulder silhouette is usually unoccluded, the HSMS is stable for gait authentication.

2) The HSMS and LKGFI are combined to analyze gait. The HSMS captures the most salient static feature of a gait sequence, while the LKGFI extracts more dynamic information of the human motion. Taking advantage of both features, the LKGFI-HSMS has powerful discriminative ability.

3) To resolve the view problem, the walking direction is estimated and used as a selection criterion for gait authentication. The walking direction is computed from the positions and silhouette heights at the beginning and ending points of a gait cycle, and the view is identified from the obtained walking direction. For gait authentication, the view is used to select the target's gait features in a feature database. This ensures that our method performs well under view changes.

II. OVERVIEW OF THE APPROACH

As mentioned above, most previous work adopts low-level information such as a single static feature (GEI[10], STM-SPP[6]) or a single dynamic feature (AEI[2], GFI[14]). Commonly, the dynamic features contained in human walking are sufficient for gait analysis, but they are unstable when the limbs self-occlude. To obtain optimal performance, statistical analysis should be applied to the temporal patterns of an individual. Therefore, we consider higher-level information that includes both static and dynamic features for gait recognition. Moreover, most available gait analysis algorithms achieve good performance in the frontal view, but their performance degrades seriously when the view changes. Based on the above considerations, we present a novel method based on analyzing silhouette contours and estimating view for gait recognition.

The proposed approach is shown in Fig. 1. For each input sequence, preprocessing is first conducted to extract the gait silhouettes and the gait cycle. Secondly, the gait's dynamic feature LKGFI and static feature HSMS are extracted from the gait silhouettes. Simultaneously, the walking direction is recognized and the view between the person and the camera is determined. Finally, the view is used to find the target's LKGFI (HSMS) in the corresponding database established in advance, and the similarity scores are calculated using the Euclidean distance metric. Furthermore, the product rule combines the classification results of LKGFI and HSMS.

Fig. 1. The illustration of human recognition using LKGFI-HSMS and view.
III. GAIT RECOGNITION USING LKGFI-HSMS AND VIEW

Here, two classifiers are estimated for gait recognition: one is based on the dynamic feature LKGFI and view, the other on the static feature HSMS and view. The scores obtained from the two classifiers represent the similarity between two gaits. However, a single classifier's performance is unsatisfactory. To further improve the algorithm's efficiency and accuracy, the two classifiers are combined using a product rule.

A. Silhouettes Extraction

Silhouette extraction involves segmenting the regions corresponding to a walking person in a cluttered scene[6]. The gait sequence is first processed using background subtraction, binarization, morphological processing and connected-component analysis. Then, the silhouette image in each frame is extracted and its bounding box is computed. Furthermore, silhouettes are normalized to a fixed size (120×200) and aligned with respect to their horizontal centers to eliminate size differences caused by the distance between the person and the camera. In addition, the gait cycle is estimated in an individual's walking sequence according to the height-width ratios of the silhouettes[14]. After the gait period is determined, the silhouette images are divided into cycles, ready for generating LKGFI and HSMS.
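The cycle-estimation step above can be sketched as follows. The paper only states that the cycle is found from the silhouettes' height-width ratios[14]; the bounding-box input format and the double-support peak-picking heuristic here are illustrative assumptions.

```python
# Hedged sketch: estimate the gait period from silhouette aspect ratios.
# The width/height ratio peaks at every double-support stance (twice per
# cycle), so one full cycle spans every second peak.

def aspect_ratios(boxes):
    """boxes: list of (width, height) silhouette bounding boxes."""
    return [w / h for w, h in boxes]

def local_maxima(signal):
    """Indices of strict local maxima in a 1-D signal."""
    return [i for i in range(1, len(signal) - 1)
            if signal[i - 1] < signal[i] > signal[i + 1]]

def gait_period(boxes):
    """Number of frames in one gait cycle, or None if too few peaks."""
    peaks = local_maxima(aspect_ratios(boxes))
    if len(peaks) < 3:
        return None
    # A full cycle contains two steps, i.e. every second ratio peak.
    return peaks[2] - peaks[0]
```

A real implementation would smooth the ratio signal before peak picking to suppress segmentation noise.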

B. Human Recognition Based on LKGFI and View

Lam et al.[14] proposed GFI for human recognition. An optical flow field, or image velocity field, is calculated from a gait sequence with the Horn-Schunck method to generate GFI. The optical flow field represents the motion of the moving human in the scene, so GFI contains the human's motion information over a gait period. Unfortunately, every pixel in the image must be computed to obtain the dense optical flow in the Horn-Schunck method, which incurs a high computational burden. In contrast, the Lucas-Kanade approach is a popular sparse optical flow method, in which a group of image corners is extracted before the optical flow is calculated. Therefore, the Lucas-Kanade approach reduces the computing cost and is applied to generate LKGFI[22]. The LKGFIs of a person under different views are shown in Fig. 2.
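The core Lucas-Kanade step can be illustrated in a few lines. This is only a bare single-window sketch of the sparse least-squares idea, not the pyramidal corner-tracking implementation an OpenCV-based system would actually use:

```python
# Minimal single-window Lucas-Kanade step (illustrative sketch).
# Solves the 2x2 normal equations  [sxx sxy; sxy syy][u; v] = -[sxt; syt]
# built from the brightness-constancy constraint Ix*u + Iy*v + It = 0.

def lucas_kanade_window(I0, I1):
    """Estimate one (u, v) flow vector for a small window.
    I0, I1: 2-D lists of grey values at times t and t+1 (same size)."""
    h, w = len(I0), len(I0[0])
    sxx = sxy = syy = sxt = syt = 0.0
    for y in range(h - 1):
        for x in range(w - 1):
            ix = I0[y][x + 1] - I0[y][x]      # spatial gradients
            iy = I0[y + 1][x] - I0[y][x]
            it = I1[y][x] - I0[y][x]          # temporal gradient
            sxx += ix * ix; sxy += ix * iy; syy += iy * iy
            sxt += ix * it; syt += iy * it
    det = sxx * syy - sxy * sxy
    if abs(det) < 1e-9:                       # aperture problem: no corner
        return None
    u = -(syy * sxt - sxy * syt) / det
    v = -(sxx * syt - sxy * sxt) / det
    return u, v
```

The degenerate-determinant branch is exactly why corners are selected first: on a pure intensity ramp the 2x2 system is singular and no flow can be recovered.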

Fig. 2. The illustration of LKGFIs of a person under different views.

Most existing gait recognition methods suffer from view variations. To resolve this problem, we have proposed a gait recognition approach based on LKGFI and view[15, 22]. A person's walking direction is computed from the positions and silhouette heights at the beginning and ending points of a gait cycle, as shown in Fig. 3. We define the walking direction as $0^\circ$ when a person walks towards the camera along its optical axis and $90^\circ$ when the person walks parallel to the image plane. The walking direction is divided into four main categories based on the four quadrants of the camera's coordinate system. Then, the view is identified according to the relationship between walking directions and views.
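One plausible geometric reading of this step is sketched below. The paper only states that the direction is computed from the start/end positions ($x_b$, $x_e$) and silhouette heights ($h_b$, $h_e$); the pinhole model and the assumed focal length `f` and real height `H` are illustrative assumptions, not the paper's exact formulation.

```python
import math

# Hedged sketch of walking-direction estimation (cf. Fig. 3).
# Depth is recovered from apparent silhouette height (h ~ f*H/Z under a
# pinhole model); lateral position from the image x-coordinate.

def walking_direction(xb, xe, hb, he, cx, f=800.0, H=1.7):
    """Return alpha in degrees: 0 = toward the camera along its optical
    axis, 90 = parallel to the image plane. cx is the principal point."""
    zb, ze = f * H / hb, f * H / he                    # depth at start/end
    Xb, Xe = zb * (xb - cx) / f, ze * (xe - cx) / f    # lateral position
    dX, dZ = Xe - Xb, ze - zb
    return math.degrees(math.atan2(abs(dX), abs(dZ)))
```

With the angle in hand, quantizing it into the quadrant-based categories is a simple threshold lookup.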

Fig. 3. The illustration of determining walking direction. α is the walking direction. θ is the angle between the walking direction and y axis. xb and xe are the starting point and the ending point in the image plane, respectively. hb and he are the silhouettes’ heights at xb and xe.

For human authentication, a database of a target's gait features (LKGFI) under different views is established in advance. Then, when a passerby appears, his/her LKGFI is extracted and the walking direction is computed. Afterwards, the view, identified according to the walking direction, is used to select the target's LKGFI in the feature database. Finally, gait authentication is performed by measuring similarities among gait sequences with the Euclidean distance: the smaller the distance, the more similar the two gaits.

$sc_{lkgfi} = \sqrt{\sum\limits_{n=1}^{N} (LKGFI_{Gn} - LKGFI_{Pn})^2},$ (1)

where $N = 120 \times 200$ is the number of pixels in the LKGFI image, and $LKGFI_{Gn}$ and $LKGFI_{Pn}$ are the values of the $n$-th point in the target's image $LKGFI_G$ and the person's image $LKGFI_P$, respectively.
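The score in (1) (and likewise (4) below) is a plain Euclidean distance between two flattened feature vectors, which can be sketched as:

```python
import math

# Euclidean similarity score between two feature vectors, e.g. a
# flattened 120x200 LKGFI or a 100-point HSMS. Lower = more similar.

def similarity_score(feat_g, feat_p):
    assert len(feat_g) == len(feat_p)
    return math.sqrt(sum((g - p) ** 2 for g, p in zip(feat_g, feat_p)))
```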

The LKGFI database of the target under different views is established in advance for this method. In the phase of gait recognition,the view is estimated precisely and treated as the criterion to choose the target's gait feature. The proposed method is robust to view variations.

C. Human Recognition Based on HSMS and View

As a prominent feature of the upper body, the head and shoulder silhouette can be identified easily and is usually unoccluded. It is an ideal cue for identifying a human being, whether the person is viewed from the front or the rear. In addition, PSA provides an efficient way to obtain a mean shape, especially for 2-D shapes[6]. The obtained shape feature is invariant to scale, translation and rotation. Moreover, the Procrustes mean shape has considerable discriminative power for identifying individuals[7]. Thus, this work extracts the HSMS from a gait sequence using PSA for human recognition.

Once a person's silhouettes have been obtained, the head and shoulder model can be extracted morphologically (i.e., the top $0.35H$ of the bounding box[10], where $H$ is the box's height). Then, each head and shoulder contour is approximated by 100 points $(x_i, y_i), i = 1, 2, \cdots, 100$, using interpolation based on point correspondence analysis[23]. Furthermore, the contour is represented in the complex plane as $Z = [z_1, z_2, \cdots, z_i, \cdots, z_k]$, $z_i = x_i + jy_i$, $k = 100$.

The shape centroid $(\bar{x},\bar{y})$ is computed by:

$\bar{x} = \frac{1}{k}\sum\limits_{i=1}^{k} x_i, \quad \bar{y} = \frac{1}{k}\sum\limits_{i=1}^{k} y_i.$ (2)

To obtain the configuration matrix, the shape is first centered by defining a complex vector $U = [u_1, u_2, \cdots, u_i, \cdots, u_k]^{\rm T}$, $u_i = z_i - \bar{z}$, $\bar{z} = \bar{x} + j\bar{y}$. Given a set of $N$ head and shoulder shapes in a gait cycle, $N$ complex vectors $U_i~(i = 1, 2, \cdots, N)$ are obtained and the configuration matrix is computed as

$S = \sum\limits_{i=1}^{N} \frac{U_i U_i^*}{U_i^* U_i},$ (3)

where the superscript ``$*$'' means the complex conjugate transpose.

As a result, the Procrustes mean shape $\hat{U}$, the eigenvector of $S$ corresponding to its largest eigenvalue, represents the HSMS for gait recognition. $\hat{U}$ is a complex vector of length $k$ that characterizes the head and shoulder shape of a gait sequence. An illustration of the HSMS is shown in Fig. 4.
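The PSA pipeline of (2)-(3) and the dominant-eigenvector step can be sketched as follows. Solving the eigenproblem by power iteration is an illustrative choice, not necessarily the paper's solver:

```python
# Sketch of HSMS extraction via Procrustes shape analysis: contours as
# complex vectors, configuration matrix S per (3), mean shape = dominant
# eigenvector of S (here found by power iteration).

def center(contour):
    """Translate a contour (list of complex points) to its centroid."""
    c = sum(contour) / len(contour)
    return [z - c for z in contour]

def configuration_matrix(contours):
    k = len(contours[0])
    S = [[0j] * k for _ in range(k)]
    for contour in contours:
        u = center(contour)
        norm = sum(abs(z) ** 2 for z in u)                 # u* u (scalar)
        for i in range(k):
            for j in range(k):
                S[i][j] += u[i] * u[j].conjugate() / norm  # u u* / (u* u)
    return S

def mean_shape(contours, iters=200):
    """Dominant eigenvector of S = the Procrustes mean shape (HSMS)."""
    S = configuration_matrix(contours)
    k = len(S)
    # Start from a non-constant vector: centered shapes sum to zero, so
    # an all-ones start vector would be orthogonal to every shape.
    v = [complex(i + 1, 0) for i in range(k)]
    for _ in range(iters):
        w = [sum(S[i][j] * v[j] for j in range(k)) for i in range(k)]
        norm = sum(abs(z) ** 2 for z in w) ** 0.5
        v = [z / norm for z in w]
    return v
```

The result is defined only up to a unit complex phase, which is exactly the rotation invariance PSA provides.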

Fig. 4. The illustration of HSMS.

The gait recognition method using HSMS and view is similar to the approach based on LKGFI and view mentioned above. The similarity score between the target's HSMS and the person's HSMS is calculated as:

$sc_{hsms} = \sqrt{\sum\limits_{n=1}^{k} (HSMS_{Gn} - HSMS_{Pn})^2},$ (4)

where $k = 100$ is the number of points in the HSMS. $HSMS_{Gn}$ and $HSMS_{Pn}$ are the $n$-th points of the target's $HSMS_G$ and the person's $HSMS_P$, respectively.

D. Combining Rule

We have calculated the similarity scores for LKGFI and HSMS, respectively. However, the recognition rate obtained with a single classifier is unsatisfactory. To improve the system's efficiency and accuracy, the two classifiers are combined. Among the variety of fusion approaches available for combining independent classifiers, we adopt the product rule.

These similarity scores cannot be combined directly because they have quite different ranges and distributions. To make them comparable for fusion, they must first be transformed to the same scale. The linear mapping method is applied here:

$\hat{sc}_{lkgfi} = \frac{sc_{lkgfi} - \min(SC_{LKGFI})}{\max(SC_{LKGFI}) - \min(SC_{LKGFI})},$ (5)

and

$\hat{sc}_{hsms} = \frac{sc_{hsms} - \min(SC_{HSMS})}{\max(SC_{HSMS}) - \min(SC_{HSMS})},$ (6)

where $SC_{LKGFI}$ and $SC_{HSMS}$ are the vectors of similarity scores before mapping, $sc_{lkgfi}$ and $sc_{hsms}$ are elements of $SC_{LKGFI}$ and $SC_{HSMS}$, and $\hat{sc}_{lkgfi}$ and $\hat{sc}_{hsms}$ are the results of the linear mapping.

Then, the product rule combines $\hat{sc}_{lkgfi}$ and $\hat{sc}_{hsms}$ to obtain the final decision, which enhances the verification performance:

$F_{BOTH} = \hat{sc}_{lkgfi} \times \hat{sc}_{hsms}.$ (7)
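The normalization and product-rule fusion of (5)-(7) can be sketched as:

```python
# Min-max map each classifier's distance scores to [0, 1], then combine
# with the product rule. Lower fused score = better match, since both
# inputs are distances.

def min_max(scores):
    lo, hi = min(scores), max(scores)
    return [(s - lo) / (hi - lo) for s in scores]

def fuse(sc_lkgfi, sc_hsms):
    """Element-wise product rule over the two normalized score lists."""
    return [a * b for a, b in zip(min_max(sc_lkgfi), min_max(sc_hsms))]
```

A candidate is then accepted when its fused score falls below a threshold, as in Step 8 below.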

The human authentication based on LKGFI-HSMS is implemented as follows:

Step 1. A database about a target's gait features (LKGFI and HSMS) under different views is established in advance.

Step 2. Once a person appears, his/her gait features ($LKGFI_P$ and $HSMS_P$) are extracted.

Step 3. The walking direction of the passerby is computed and the view is obtained.

Step 4. The target's gait features ($LKGFI_G$ and $HSMS_G$) under the identified view are selected from the established database.

Step 5. The similarity scores for LKGFI and HSMS are obtained according to (1) and (4), respectively.

Step 6. The similarity scores from step 5 are mapped according to (5) and (6).

Step 7. The mapped similarity scores are combined according to the product rule (7) to obtain the final decision.

Step 8. The passerby is identified according to the relationship between the final decision and a threshold.

IV. EXPERIMENTAL RESULTS

A. Experiment on CASIA B Database

This section presents the results of person identification experiments conducted on the CASIA B database. This database contains sequences of 124 persons from 11 views, namely $0^\circ$, $18^\circ$, $36^\circ$, $54^\circ$, $72^\circ$, $90^\circ$, $108^\circ$, $126^\circ$, $144^\circ$, $162^\circ$ and $180^\circ$. Six normal gait sequences are captured for each person under the different views. The database is divided into 2 subgroups: the first subgroup, consisting of one person, is used for establishing the LKGFI (HSMS) database; the remaining persons are used for evaluating the verification rate of the proposed recognition algorithm. For view identification, the direction range of $0^\circ\sim180^\circ$ is divided into 11 regions, one per view ($0^\circ$, $18^\circ$, $36^\circ$, $54^\circ$, $\cdots$, $180^\circ$). The first region, $0^\circ\sim9^\circ$, corresponds to the view $0^\circ$, and the last region, $171^\circ\sim180^\circ$, to the view $180^\circ$. The middle area is divided equally, each region spanning $18^\circ$: $9^\circ\sim27^\circ$ for the view $18^\circ$, $27^\circ\sim45^\circ$ for $36^\circ$, $45^\circ\sim63^\circ$ for $54^\circ$, etc. All experiments are implemented using OpenCV 2.1 in the Microsoft Visual Studio 2008 Express Edition environment.

The verification performance of LKGFI-HSMS is compared with the following gait representation approaches: GEI, GFI, LKGFI, HSMS, AEI[2], and STM-SPP[6]. The target is the 001st person in the CASIA B database, and the testing gait sequences include 124 persons walking under 11 views. LKGFI-HSMS preserves more dynamic and static information of a person's silhouette contours by using the Lucas-Kanade method and PSA, achieving strong discrimination power at a low computing cost. The testing person can walk under different views; the view is estimated with the proposed approach and used to select the target's gait feature before recognition, which copes with view variations. To evaluate the efficiency of GFI and LKGFI, all gait sequences of every person are tested once, giving the average time consumption shown in Table I. To construct the GFI[14], the Horn-Schunck method, which suffers a high computational load, is adopted to obtain the dense optical flow. In contrast, the Lucas-Kanade approach is used to construct the LKGFI at a reduced computational cost, so the LKGFI can satisfy the real-time requirement. Moreover, the results show that both LKGFI and GEI reach low operation times (12 ms for LKGFI and 11 ms for GEI); however, LKGFI achieves better performance than GEI, as discussed below. To further evaluate the proposed method, we compare the average false rejection rates (FRR) at false acceptance rates (FAR) of $4\,\%$ and $10\,\%$ for the different gait representations[4]. The LKGFI captures only the motion characteristic of a gait sequence in a period, while the HSMS is only a shape cue; the LKGFI-HSMS captures both the motion and structural characteristics of a gait and therefore has the greatest discrimination power. The results are shown in Table II: the FRRs achieved by LKGFI-HSMS are the lowest at the same FAR. Fig. 5 shows the receiver operating characteristic (ROC) curves, with the false rejection rate on the $x$-axis and the false acceptance rate on the $y$-axis. The smaller the area below the ROC curve, the better the performance, and the area achieved by LKGFI-HSMS is the smallest. It is evident from Table II and Fig. 5 that the verification performance of LKGFI-HSMS is better than the other gait representations and that our approach is robust to view variations.

Table I
AVERAGE TIME CONSUMPTION FOR CONSTRUCTING GFI, GEI AND LKGFI

Fig. 5. Receiver operating characteristic (ROC) (%) of different gait representations: GEI, LKGFI, HSMS, AEI, STM-SPP, and LKGFI-HSMS on the CASIA B database.

Table II
AVERAGE FALSE REJECTION RATES (%) AT FALSE ACCEPTANCE RATES OF 4% AND 10% FOR GEI, LKGFI, HSMS, AEI, STM-SPP, AND LKGFI-HSMS ON THE CASIA B DATABASE
B. Experiment on NLPR Database

This section presents the experimental results on the NLPR database. This database contains 2 880 sequences of 20 persons from 3 views $(0^\circ, 45^\circ, 90^\circ)$, with four normal sequences captured under each view. For view identification, the direction range $0^\circ\sim90^\circ$ is divided into 3 regions, one per view: $0^\circ\sim36^\circ$ for $0^\circ$, $37^\circ\sim63^\circ$ for $45^\circ$, and $64^\circ\sim90^\circ$ for $90^\circ$. The sequences of the person ``fyc'' are chosen as the target, while the remaining persons are used for evaluating the verification rates of LKGFI-HSMS, GEI, GFI, LKGFI, HSMS, AEI, and STM-SPP. We test these methods in two respects: the average FRR at FAR of $4\,\%$ and $10\,\%$ for the different gait representations, and the ROC curves. The results, shown in Table III and Fig. 6 respectively, further illustrate that the verification performance of LKGFI-HSMS is better than the other gait representations.

Table III
AVERAGE FALSE REJECTION RATES (%) AT FALSE ACCEPTANCE RATES OF 4% AND 10% FOR GEI, LKGFI, HSMS, AEI, STM-SPP, AND LKGFI-HSMS ON THE NLPR DATABASE

Fig. 6. Receiver operating characteristic (ROC) (%) of different gait representations: GEI, LKGFI, HSMS, AEI, STM-SPP, and LKGFI-HSMS on the NLPR database.
V. CONCLUSION

In this paper, we have proposed a gait representation, LKGFI-HSMS, which analyzes the motion and shape of a person's silhouette contours in a video sequence. Firstly, LKGFI was generated by calculating an optical flow field with the Lucas-Kanade method, which preserves more motion information of the silhouette contours while significantly reducing the processing time. Secondly, HSMS, characterizing the head and shoulder shape of the person in a gait cycle, was obtained using PSA; it has good discrimination power and low computing cost because the shape analysis is part-based rather than over the whole contour. Thirdly, by taking the view into account, the method can deal with view variations. Furthermore, the verification performance was enhanced by combining the two classifiers. Experimental results have shown the effectiveness of the proposed approach compared with several related silhouette-based gait recognition methods.

References
[1] Yang X C, Zhou Y, Zhang T H, Shu G, Yang J. Gait recognition based on dynamic region analysis. Signal Processing, 2008, 88(9):2350-2356
[2] Zhang E, Zhao Y W, Xiong W. Active energy image plus 2DLPP for gait recognition. Signal Processing, 2010, 90(7):2295-2302
[3] Boulgouris N V, Chi Z X. Human gait recognition based on matching of body components. Pattern Recognition, 2007, 40(6):1763-1770
[4] Zhang R, Vogler C, Metaxas D. Human gait recognition at sagittal plane. Image and Vision Computing, 2007, 25(3):321-330
[5] Liu Y Q, Wang X. Human gait recognition for multiple views. Procedia Engineering, 2011, 15:1832-1836
[6] Choudhury S D, Tjahjadi T. Silhouette-based gait recognition using Procrustes shape analysis and elliptic Fourier descriptors. Pattern Recognition, 2012, 45(9):3414-3426
[7] Wang L, Tan T N, Hu W M, Ning H Z. Automatic gait recognition based on statistical shape analysis. IEEE Transactions on Image Processing, 2003, 12(9):1120-1131
[8] Shutler J D, Nixon M S. Zernike velocity moments for sequence-based description of moving features. Image and Vision Computing, 2006, 24(4):343-356
[9] Shutler J D, Nixon M S, Harris C J. Statistical gait description via temporal moments. In:Proceedings of the 4th IEEE Southwest Symposium on Image Analysis and Interpretation. Austin, TX:IEEE, 2004. 291-295
[10] Han J, Bhanu B. Individual recognition using gait energy image. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2006, 28(2):316-322
[11] Roy A, Sural S, Mukherjee J. Gait recognition using pose Kinematics and pose energy image. Signal Processing, 2012, 92(3):780-792
[12] Pratik C, Aditi R, Shamik S, Jayanta M. Pose depth volume extraction from RGB-D streams for frontal gait recognition. Journal of Visual Communication and Image Representation, 2014, 25(1):53-63
[13] Zeng W, Wang C, Yang F F. Silhouette-based gait recognition via deterministic learning. Pattern Recognition, 2014, 47(11):3568-3584
[14] Lam T H W, Cheung K H, Liu J N K. Gait flow image: a silhouette-based gait representation for human identification. Pattern Recognition, 2011, 44(4):973-987
[15] Jia S M, Wang L J, Wang S, Li X Z. Personal identification combining modified gait flow image and view. Optical and Precision Engineering, 2012, 20(11):2500-2507
[16] Wang L, Ning H, Tan T, Hu W. Fusion of static and dynamic body biometrics for gait recognition. IEEE Transactions on Circuits and Systems for Video Technology, 2004, 14(2):149-158
[17] Barnich O, Droogenbroech M V. Frontal-view gait recognition by intra- and inter-frame rectangle size distribution. Pattern Recognition Letters, 2009, 30(10):893-901
[18] Bodor R, Drenner A, Fehr D, Masoud O, Papanikolopoulos N. View-independent human motion classification using image-based reconstruction. Image and Vision Computing, 2009, 27(8):1197-1206
[19] Liu N, Lu J W, Tan Y P. Joint subspace learning for view-invariant gait recognition. IEEE Signal Processing Letters, 2011, 18(7):431-434
[20] Kusakunniran W, Wu Q, Li H D, Zhang J. Multiple views gait recognition using view transformation model based on optimized gait energy image. In:Proceedings of the 12th IEEE International Conference on Computer Vision Workshops. Kyoto:IEEE, 2009. 1058-1064
[21] Liu N N, Lu J W, Yang G, Tan Y P. Robust gait recognition via discriminative set matching. Journal of Visual Communication and Image Representation, 2013, 24(4):439-447
[22] Wang L J, Jia S M, Li X Z, Wang S. Human gait recognition based on gait flow image considering walking direction. In:Proceedings of the 2012 IEEE International Conference on Mechatronics and Automation. Chengdu, China:IEEE, 2012. 1990-1995
[23] Mowbray S D, Nixon M S. Automatic gait recognition via Fourier descriptors of deformable objects. In:Proceedings of the 4th International Conference on Audio- and Video-Based Biometric Person Authentication. Guildford:Springer-Verlag, 2003. 556-573