Self-Explanatory Convex Sparse Representation for Image Classification
Baodi Liu, Yuxiong Wang, Bin Shen, Yujin Zhang, Yanjiang Wang, Weifeng Liu
Abstract—Sparse representation techniques have been widely used in various areas of computer vision over the last decade. Unfortunately, in the current formulations, there is no explicit relationship between the learned dictionary and the original data. By tracing sparse representation back to the $K$-means algorithm and connecting the two, a novel variation termed self-explanatory convex sparse representation (SCSR) is proposed in this paper. To be specific, the basis vectors of the dictionary are constrained to be convex combinations of the data points. The atoms then capture a notion of centroids similar to those of $K$-means, leading to enhanced interpretability; sparse representation and $K$-means are thus unified under the same framework. An appealing property also emerges: both the weight and code matrices tend to be naturally sparse without additional constraints. Compared with the standard formulations, SCSR is also easier to extend to kernel spaces. To solve the sparse coding subproblem and the dictionary learning subproblem, block-wise coordinate descent and the method of Lagrange multipliers are adopted, respectively. The proposed algorithm is validated on image classification, a successful application of sparse representation. Experimental results on several benchmark data sets, such as UIUC-Sports, Scene 15, and Caltech-256, demonstrate the effectiveness of the proposed algorithm.
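To make the formulation concrete, the following is a minimal NumPy sketch of the SCSR objective described above: the dictionary is $D = XW$, where each column of $W$ lies on the probability simplex, so every atom is a convex combination of the data points, and the codes $S$ are $\ell_1$-regularized. Note the solvers here are simplified stand-ins (one ISTA step for $S$ and one projected-gradient step for $W$ per iteration), not the paper's block-wise coordinate descent and Lagrange-multiplier updates; the function and variable names (`scsr`, `X`, `W`, `S`, `lam`) are illustrative, not from the paper.

```python
import numpy as np

def soft_threshold(A, t):
    """Proximal operator of the l1 norm (elementwise soft thresholding)."""
    return np.sign(A) * np.maximum(np.abs(A) - t, 0.0)

def project_simplex(v):
    """Euclidean projection of a vector onto the probability simplex."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u) - 1.0
    rho = np.nonzero(u * np.arange(1, v.size + 1) > css)[0][-1]
    return np.maximum(v - css[rho] / (rho + 1.0), 0.0)

def scsr(X, k, lam=0.1, n_iter=100, seed=0):
    """Simplified SCSR sketch:
        min_{W,S} ||X - X W S||_F^2 + lam * ||S||_1
    with each column of W on the simplex, so every atom of the
    dictionary D = X W is a convex combination of the columns of X.
    X is d x n (n data points), W is n x k, S is k x n.
    """
    rng = np.random.default_rng(seed)
    d, n = X.shape
    W = rng.random((n, k))
    W /= W.sum(axis=0)                      # start on the simplex
    S = np.zeros((k, n))
    for _ in range(n_iter):
        D = X @ W                           # atoms: convex combos of data
        # sparse coding step: one ISTA iteration on S
        L = 2.0 * np.linalg.norm(D, 2) ** 2 + 1e-8
        G = 2.0 * D.T @ (D @ S - X)
        S = soft_threshold(S - G / L, lam / L)
        # weight update: one projected-gradient step on W
        R = X @ W @ S - X
        GW = 2.0 * X.T @ R @ S.T
        Lw = 2.0 * np.linalg.norm(X, 2) ** 2 * np.linalg.norm(S, 2) ** 2 + 1e-8
        W = np.apply_along_axis(project_simplex, 0, W - GW / Lw)
    return W, S
```

Because $W$ is nonnegative and its columns sum to one, the learned atoms remain in the convex hull of the data, which is the source of the interpretability claim: each atom can be read off as a weighted average of identifiable training samples, much like a $K$-means centroid.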