Review of Deep Learning Algorithms and Architectures

Authors

Shrestha, Ajay
Mahmood, Ausif

Issue Date

2019-04-22

Type

Article

Language

en_US

Keywords

Machine learning algorithm, Optimization, Artificial intelligence, Deep neural network architectures, Convolution neural network, Backpropagation, Supervised and unsupervised learning

Abstract

Deep learning (DL) is playing an increasingly important role in our lives. It has already made a major impact in areas such as cancer diagnosis, precision medicine, self-driving cars, predictive forecasting, and speech recognition. The painstakingly handcrafted feature extractors used in traditional learning, classification, and pattern-recognition systems do not scale to large data sets. In many cases, depending on the problem complexity, DL can also overcome the limitations of earlier shallow networks that prevented efficient training and the abstraction of hierarchical representations from multi-dimensional training data. A deep neural network (DNN) uses multiple (deep) layers of units with highly optimized algorithms and architectures. This paper reviews several optimization methods that improve training accuracy and reduce training time. We delve into the math behind the training algorithms used in recent deep networks, and we describe their current shortcomings, enhancements, and implementations. The review also covers different types of deep architectures, such as deep convolutional networks, deep residual networks, recurrent neural networks, reinforcement learning, variational autoencoders, and others.
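The abstract refers to backpropagation and gradient-based optimization of network weights. As a minimal, illustrative sketch only (the toy OR dataset, learning rate, and iteration count are arbitrary choices, not taken from the paper), the chain-rule weight update at the heart of these training algorithms can be shown on a single sigmoid unit:

```python
import math

# Gradient-descent training of one sigmoid unit on a toy OR dataset.
# This is the single-unit case of the chain-rule update that
# backpropagation applies layer by layer in a deep network.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]  # toy OR task
w = [0.0, 0.0]   # weights
b = 0.0          # bias
lr = 0.5         # learning rate (arbitrary)

def mse():
    # Mean squared error over the toy dataset
    return sum((sigmoid(w[0]*x1 + w[1]*x2 + b) - y) ** 2
               for (x1, x2), y in data) / len(data)

loss_before = mse()
for _ in range(2000):
    for (x1, x2), y in data:
        p = sigmoid(w[0]*x1 + w[1]*x2 + b)
        # Chain rule for squared-error loss through the sigmoid:
        # dL/dz = 2*(p - y) * p*(1 - p); dz/dw_i = x_i, dz/db = 1
        g = 2 * (p - y) * p * (1 - p)
        w[0] -= lr * g * x1
        w[1] -= lr * g * x2
        b -= lr * g
loss_after = mse()
```

Stacking such units into layers and propagating the same chain-rule gradients backward through them is what the training algorithms surveyed in the paper optimize and accelerate.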

Citation

A. Shrestha and A. Mahmood, "Review of Deep Learning Algorithms and Architectures," in IEEE Access, vol. 7, pp. 53040-53065, 2019.

Publisher

IEEE
