Learning from Distributional Features in Graph Corpora with Applications to Medical Image Analysis – In this paper, we address the task of training a new classifier on image data. Building on the notion of the ‘good old-fashioned’ classifier, we define a new classifier characterized by its ability to infer the class label associated with the data. Experimental tests show that the new classifier produces results comparable to those of the existing classifier. Finally, we report, for the first time, results obtained with the popular Convolutional Neural Network technique.
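The abstract gives no implementation details, so the following is only a minimal sketch of the kind of convolutional classifier it mentions: a forward pass that infers a class label from an image. All names, shapes, and hyperparameters here are assumptions for illustration, not the paper's method.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D cross-correlation of a single-channel image with one kernel."""
    h, w = image.shape
    kh, kw = kernel.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def predict(image, kernels, weights, bias):
    """Infer a class label: conv -> ReLU -> global average pool -> linear -> softmax."""
    feats = np.array([np.maximum(conv2d(image, k), 0).mean() for k in kernels])
    probs = softmax(weights @ feats + bias)
    return int(np.argmax(probs)), probs

# Hypothetical random image, filters, and classifier head
rng = np.random.default_rng(0)
image = rng.standard_normal((8, 8))
kernels = rng.standard_normal((4, 3, 3))   # four 3x3 filters (assumed)
weights = rng.standard_normal((3, 4))      # three classes (assumed)
bias = np.zeros(3)

label, probs = predict(image, kernels, weights, bias)
```

Training such a classifier would additionally require a loss and gradient updates, which the abstract does not specify.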
Dynamic Systems as a Multi-Agent Simulation
Leveraging Latent Event Representations for Multi-Dimensional Modeling
Automatic Tuning of Deep Convolutional Neural Networks Using Group Variant Registration in Image Segmentation – In this work, we present a general framework for modeling a deep neural network (DNN) using a mixture of two types of inputs, namely a first-class convolutional network in which the weights of the learned networks are computed from a combination of training and labeling data. In this way, we extend existing DNNs, including those based on the traditional back-propagation method, to convolutional and network-based settings. These DNNs can be trained on either a single-frame or a multi-frame training set, making it possible to learn both types of input simultaneously for each network. Since both models are trained within one network, we can learn weights both for the network as a whole and for each batch of data. Experimental results demonstrate the usefulness of such a model on image retrieval and classification tasks over large-scale image databases.
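One plausible reading of "learn the weights both for the network and for each batch of data" is a model trained jointly with a learnable scalar weight per batch. The sketch below assumes a simple logistic-regression model and a crude heuristic for updating the batch weights; none of these choices come from the paper itself.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(X, y, n_batches=4, epochs=200, lr=0.5):
    """Jointly learn model weights and one scalar weight per batch of data."""
    rng = np.random.default_rng(0)
    n, d = X.shape
    theta = np.zeros(d)
    batch_w = np.ones(n_batches)                      # learnable per-batch weights
    batches = np.array_split(rng.permutation(n), n_batches)
    for _ in range(epochs):
        for b, idx in enumerate(batches):
            p = sigmoid(X[idx] @ theta)
            grad = X[idx].T @ (p - y[idx]) / len(idx)
            theta -= lr * batch_w[b] * grad           # batch weight scales the model update
            loss = -np.mean(y[idx] * np.log(p + 1e-9)
                            + (1 - y[idx]) * np.log(1 - p + 1e-9))
            # heuristic batch-weight update (an assumption, not from the paper)
            batch_w[b] = max(0.1, batch_w[b] - 0.01 * lr * loss)
    return theta, batch_w

# Toy linearly separable data (hypothetical)
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 2))
y = (X @ np.array([2.0, -1.0]) > 0).astype(float)
theta, batch_w = train(X, y)
accuracy = float(np.mean((sigmoid(X @ theta) > 0.5) == (y > 0.5)))
```

Here each batch both updates the shared model parameters and maintains its own scalar weight, loosely mirroring the abstract's claim that weights are learned for the network and for each batch of data.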