Learning Multi-Attribute Classification Models for Semi-Supervised Classification – The main contributions of this study are twofold. First, we propose a novel framework for multi-attribute classification of high-dimensional vectors, where the number of attributes is fixed in the model parameters. Second, we propose a novel loss function to reduce the dimensionality of these models; this loss is derived by maximizing the Euclidean distance between two attribute vectors, which reduces the number of model parameters. During training, the model is evaluated on both its predicted labels and its predicted attributes. Results on synthetic and real datasets demonstrate that our approach outperforms state-of-the-art multi-attribute classification methods.
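The abstract above does not give a formula for its distance-based loss, but one plausible reading can be sketched as follows: a loss whose minimization maximizes the Euclidean distance between two attribute vectors. All names here are illustrative, not taken from the paper.

```python
import math

def attribute_distance_loss(a, b):
    """Hedged sketch: a loss that is minimized when the Euclidean
    distance between attribute vectors a and b is maximized.
    This is one plausible reading of the abstract, not its actual loss."""
    dist = math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return -dist  # minimizing the loss maximizes the distance

# More separated attribute vectors yield a lower (better) loss.
close = attribute_distance_loss([0.0, 0.0], [0.1, 0.1])
far = attribute_distance_loss([0.0, 0.0], [1.0, 1.0])
```

Under this reading, training would push the two attribute embeddings apart, which is consistent with the claim that the loss separates attribute representations.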
This paper presents a method for supervised sparse matrix factorization that learns dense latent structure from nonlinear feature representations. Given a linear subspace of an output space, the latent structure is represented as a sparse vector space by a matrix, and the matrices are learned efficiently by minimizing the sum of the matrix vectors in that space. To make training efficient, the matrices are constructed from binary vector representations. Two variants of the proposed approach are designed: the first is a supervised sparse matrix factorization algorithm suited to learning sparse matrix vectors in the latent structure, and the second is a sparse factorization algorithm suited to learning sparse matrix vectors through a weighted matrix factorization representation. The proposed method achieves state-of-the-art results with high precision on several datasets.
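The abstract does not specify its factorization algorithm, but the general technique it names can be illustrated generically: factor a matrix X ≈ w hᵀ while encouraging sparsity in one factor via an L1 penalty, optimized by proximal gradient descent. This is a minimal rank-1 sketch, not the paper's method; all parameter values are illustrative.

```python
import random

def soft_threshold(x, lam):
    # Proximal operator of the L1 penalty: pushes small values to exactly 0.
    if x > lam:
        return x - lam
    if x < -lam:
        return x + lam
    return 0.0

def sparse_rank1_factorization(X, lam=0.01, lr=0.05, iters=1000, seed=0):
    """Hedged sketch: factor X ~ outer(w, h) with an L1 penalty on h,
    using proximal gradient descent. A generic illustration of sparse
    matrix factorization, not the algorithm from the abstract."""
    rng = random.Random(seed)
    m, n = len(X), len(X[0])
    w = [rng.uniform(0.5, 1.0) for _ in range(m)]
    h = [rng.uniform(0.5, 1.0) for _ in range(n)]
    for _ in range(iters):
        # Residual R = w h^T - X and its gradients w.r.t. w and h.
        R = [[w[i] * h[j] - X[i][j] for j in range(n)] for i in range(m)]
        gw = [sum(R[i][j] * h[j] for j in range(n)) for i in range(m)]
        gh = [sum(R[i][j] * w[i] for i in range(m)) for j in range(n)]
        w = [w[i] - lr * gw[i] for i in range(m)]
        # Gradient step on h followed by the proximal (sparsifying) step.
        h = [soft_threshold(h[j] - lr * gh[j], lr * lam) for j in range(n)]
    return w, h

# A rank-1 matrix with an all-zero column; the L1 step should zero out h[1].
X = [[2.0, 0.0, 1.0],
     [0.0, 0.0, 0.0],
     [4.0, 0.0, 2.0]]
w, h = sparse_rank1_factorization(X)
err = sum((w[i] * h[j] - X[i][j]) ** 2 for i in range(3) for j in range(3))
```

The soft-thresholding step is what distinguishes this from plain least-squares factorization: it drives coordinates of h that carry no signal to exactly zero, yielding a sparse factor.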
Learning the Top Labels of Short Texts for Spiny Natural Words
On top of existing computational methods for adaptive selection
Learning Multi-Attribute Classification Models for Semi-Supervised Classification
Stochastic Conditional Gradient for Graphical Models With Side Information
Robust Nonnegative Matrix Factorization with Submodular Functions