Face recognition is an important component of many smart-device apps, e.g., face unlocking, people tagging, and games on smartphones, tablets, and smart glasses. Sparse Representation Classification (SRC) is a state-of-the-art face recognition algorithm that has been shown to outperform many classical face recognition algorithms in OpenCV, e.g., the Eigenface algorithm. The success of SRC is due to its use of ℓ1 optimization, which makes SRC robust to noise and occlusions. Since ℓ1 optimization is computationally intensive, SRC uses random projection matrices to reduce the dimension of the ℓ1 problem. However, random projection matrices do not give consistent classification accuracy because they ignore the prior knowledge of the training set. In this paper, we propose to exploit the prior knowledge of the training set to improve recognition accuracy. Our method first learns an optimized projection matrix from the training set to produce consistent recognition performance, and then applies ℓ1-based classification that exploits the group sparsity structure of SRC to further improve recognition accuracy. Our evaluations, based on publicly available databases and real experiments, show that face recognition using the optimized projection matrix is 8-17 percent more accurate than its random counterpart and the Eigenface algorithm, and that recognition accuracy can be improved by up to a further 5 percent by exploiting the group sparsity structure. Furthermore, the optimized projection matrix does not have to be recalculated even when new faces are added to the training set. We implement SRC with the optimized projection matrix on Android smartphones and find that the computation of residuals in SRC is a severe bottleneck, taking up 85-90 percent of the computation time. To address this problem, we propose a method to compute the residuals approximately, which is 50 times faster with little sacrifice in recognition accuracy.
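To make the pipeline concrete, the following is a minimal sketch of the SRC steps the abstract refers to: project training and test faces with a projection matrix (here a plain random Gaussian one, not the optimized matrix this paper learns), solve the ℓ1-regularized coding problem, and classify by the per-class residuals. The ISTA solver, the regularization weight `lam`, and the iteration count are illustrative choices, not the paper's implementation.

```python
import numpy as np

def soft_threshold(v, t):
    # Elementwise soft-thresholding: the proximal operator of the l1 norm.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def src_classify(A, labels, y, lam=0.01, n_iter=500):
    """SRC sketch.
    A      : (d, n) dictionary of projected training faces (unit-norm columns)
    labels : length-n class label for each column of A
    y      : (d,) projected test face
    """
    # Solve min_x 0.5*||Ax - y||^2 + lam*||x||_1 with ISTA iterations.
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)
        x = soft_threshold(x - grad / L, lam / L)
    # Per-class residuals: reconstruct y using only one class's coefficients.
    # (This per-class loop is the residual computation the abstract identifies
    # as the runtime bottleneck on smartphones.)
    classes = np.unique(labels)
    residuals = [np.linalg.norm(y - A @ np.where(labels == c, x, 0.0))
                 for c in classes]
    return classes[int(np.argmin(residuals))]

# Toy usage with a random projection matrix R (hypothetical data).
rng = np.random.default_rng(0)
D, d = 100, 20                               # ambient and projected dimensions
faces0 = rng.normal(size=(D, 5)) + 5.0       # class-0 training faces
faces1 = rng.normal(size=(D, 5)) - 5.0       # class-1 training faces
X = np.hstack([faces0, faces1])
labels = np.array([0] * 5 + [1] * 5)
R = rng.normal(size=(d, D)) / np.sqrt(d)     # random projection matrix
A = R @ X
A /= np.linalg.norm(A, axis=0)               # unit-norm dictionary columns
y = R @ (faces0[:, 0] + 0.1 * rng.normal(size=D))
y /= np.linalg.norm(y)
pred = src_classify(A, labels, y)
```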
Lastly, we demonstrate the feasibility of our new algorithm by implementing and evaluating a new face unlocking app, and show its robustness to variations in pose, facial expression, lighting, and occlusion.