UB ScholarWorks

A Framework for Designing the Architectures of Deep Convolutional Neural Networks

dc.contributor.author Albelwi, Saleh
dc.contributor.author Mahmood, Ausif
dc.date.accessioned 2018-05-08T19:05:07Z
dc.date.available 2018-05-08T19:05:07Z
dc.date.issued 2017-05-24
dc.identifier.citation Albelwi, S.; Mahmood, A. A Framework for Designing the Architectures of Deep Convolutional Neural Networks. Entropy 2017, 19, 242. en_US
dc.identifier.other 10.3390/e19060242
dc.identifier.uri https://scholarworks.bridgeport.edu/xmlui/handle/123456789/2240
dc.description.abstract Recent advances in Convolutional Neural Networks (CNNs) have obtained promising results in difficult deep learning tasks. However, the success of a CNN depends on finding an architecture that fits a given problem. Hand-crafting an architecture is a challenging, time-consuming process that requires expert knowledge and effort, due to the large number of architectural design choices. In this article, we present an efficient framework that automatically designs a high-performing CNN architecture for a given problem. In this framework, we introduce a new optimization objective function that combines the error rate and the information learnt by a set of feature maps using deconvolutional networks (deconvnet). The new objective function allows the hyperparameters of the CNN architecture to be optimized in a way that enhances performance by guiding the CNN through better visualization of learnt features via deconvnet. The actual optimization of the objective function is carried out via the Nelder-Mead Method (NMM). Furthermore, our new objective function results in much faster convergence towards a better architecture. The proposed framework is able to explore a CNN architecture’s numerous design choices efficiently, and also allows effective, distributed execution and synchronization via web services. Empirically, we demonstrate that the CNN architecture designed with our approach outperforms several existing approaches in terms of its error rate. Our results are also competitive with the state of the art on the MNIST dataset and perform reasonably against the state of the art on the CIFAR-10 and CIFAR-100 datasets. Our approach plays a significant role in increasing the depth, reducing stride sizes, and leaving some convolutional layers without subsequent pooling layers in order to find a CNN architecture that achieves high recognition performance. en_US
dc.description.uri https://doi.org/10.3390/e19060242
dc.language.iso en_US en_US
dc.publisher MDPI en_US
dc.subject Convolutional neural network en_US
dc.subject Deconvolutional network en_US
dc.subject Correlation coefficient en_US
dc.subject Deep learning en_US
dc.subject Nelder-Mead method en_US
dc.title A Framework for Designing the Architectures of Deep Convolutional Neural Networks en_US
dc.type Article en_US
dc.publication.issue 6 en_US
dc.publication.name Entropy en_US
dc.publication.volume 19 en_US
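
The abstract above describes an objective function that combines the error rate with a measure of the information in the learnt feature maps (obtained via deconvnet), minimized over the architecture hyperparameters with the Nelder-Mead method. As a rough sketch only (the weighted objective, the hyperparameter encoding, and the helper function below are illustrative assumptions, not the paper's actual formulation), a Nelder-Mead search over a few CNN hyperparameters using SciPy might look like this:

    # Rough illustration: Nelder-Mead search over CNN hyperparameters with an
    # objective combining validation error and a feature-quality term.
    # train_and_evaluate and the weighting LAMBDA are hypothetical placeholders.
    import numpy as np
    from scipy.optimize import minimize

    LAMBDA = 0.5  # assumed trade-off weight between error and feature term

    def train_and_evaluate(num_layers, filters, stride):
        """Placeholder: train a CNN with these hyperparameters and return
        (validation_error, feature_score in [0, 1])."""
        # A real framework would build, train, and evaluate the CNN, and score
        # the learnt feature maps (e.g. via deconvnet visualizations).
        rng = np.random.default_rng(int(num_layers * 100 + filters + stride))
        return rng.uniform(0.05, 0.5), rng.uniform(0.0, 1.0)

    def objective(x):
        # Nelder-Mead works on continuous vectors, so integer hyperparameters
        # are rounded before the CNN is trained.
        num_layers = int(round(x[0]))
        filters    = int(round(x[1]))
        stride     = int(round(x[2]))
        error, feature_score = train_and_evaluate(num_layers, filters, stride)
        return error + LAMBDA * (1.0 - feature_score)

    # Initial guess: 4 conv layers, 32 filters, stride 1.
    result = minimize(objective, x0=[4, 32, 1], method="Nelder-Mead",
                      options={"maxiter": 50, "xatol": 0.5, "fatol": 1e-3})
    print("best hyperparameters (rounded):", np.round(result.x).astype(int))
    print("best objective value:", result.fun)

In this toy setup the simplex search treats the integer-valued design choices as continuous and rounds them inside the objective; the paper's framework handles the search over architectural design choices in its own way, including distributed execution via web services.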

