Interpolating Structural and Function Complexity of Neural Networks – Convolutional Neural Networks (CNNs) have proven to be very successful at semantic classification. However, current CNNs rely on very deep architectures and have difficulty handling low-level semantic content. In this work, we show that a deep CNN trained on image semantic data is more robust to variations in semantic content than a conventionally trained CNN. Further, we propose a method for training deep CNNs that is similar to recurrent CNNs in that it is trained from a single input (i.e., a low-level classifier). The training dataset is distributed across multiple nodes, and the network trained on one node's data is then sent to the other nodes to train another CNN. The proposed method achieves highly competitive performance on the ImageNet classification task.
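The abstract does not spell out how the data sharding and hand-off between nodes work, so the following is only a minimal illustrative sketch, not the authors' implementation: it simulates a dataset split across several "nodes", trains a small CNN on one shard, and passes a copy of that network to the next node to continue training. Every name and hyperparameter (SmallCNN, num_nodes, the synthetic data) is an assumption made for the example.

```python
# Hypothetical sketch only: dataset sharded across nodes, with the network trained
# on one shard handed to the next node to train further. Not the paper's method.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

class SmallCNN(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x))

def train_on_shard(model, images, labels, steps=20, lr=0.01):
    """Train the model on one node's local shard of data."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = F.cross_entropy(model(images), labels)
        loss.backward()
        opt.step()
    return model

# Synthetic stand-in for an image classification dataset, split across 3 nodes.
num_nodes = 3
shards = [(torch.randn(64, 3, 32, 32), torch.randint(0, 10, (64,)))
          for _ in range(num_nodes)]

model = SmallCNN()
for node_id, (images, labels) in enumerate(shards):
    # Each node receives a copy of the current network and continues training
    # on its local shard; the result seeds the next node's CNN.
    model = train_on_shard(copy.deepcopy(model), images, labels)
    print(f"node {node_id}: finished local training")
```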
An extended Stochastic Block model for learning Bayesian networks from incomplete data – Recent work has shown that deep learning can serve as a platform for learning to predict future events. Despite this, the problem remains challenging. It is unclear why such a simple yet useful network architecture can achieve this, although there are a few prior examples where Bayesian networks have been used. We propose a novel framework that tackles this problem by leveraging the modularity of deep architectures. Furthermore, we present a novel application of our framework to learning Deep Neural Networks from incomplete data.
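The abstract gives no mechanism for handling the missing entries, so the snippet below is just one common, hypothetical way to train a network on incomplete data: zero-fill the missing values and feed a missingness mask as extra input features. The network shape, missing rate, and synthetic data are all assumptions for illustration.

```python
# Hypothetical sketch only: training on incomplete data by zero-filling missing
# entries and concatenating a missingness mask to the inputs.
import torch
import torch.nn as nn
import torch.nn.functional as F

def make_incomplete(x, missing_rate=0.3):
    """Randomly drop entries to simulate incomplete observations."""
    mask = (torch.rand_like(x) > missing_rate).float()  # 1 = observed, 0 = missing
    return x * mask, mask

n, d, num_classes = 256, 20, 4
x = torch.randn(n, d)
y = torch.randint(0, num_classes, (n,))
x_obs, mask = make_incomplete(x)

# The network sees both the zero-filled features and the mask (2*d inputs).
net = nn.Sequential(nn.Linear(2 * d, 64), nn.ReLU(), nn.Linear(64, num_classes))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(200):
    opt.zero_grad()
    logits = net(torch.cat([x_obs, mask], dim=1))
    loss = F.cross_entropy(logits, y)
    loss.backward()
    opt.step()
```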
Towards a Theory of a Semantic Portal
Interpolating Structural and Function Complexity of Neural Networks
A Simple and Effective Online Clustering Algorithm Using Approximate Kernel
An extended Stochastic Block model for learning Bayesian networks from incomplete data