International Research Journal of Engineering and Technology (IRJET) e-ISSN: 2395-0056 Volume: 11 Issue: 07 | July 2024
www.irjet.net
p-ISSN: 2395-0072
An Overview of Deep Learning: Milestones, Models and Trends
Sheela N S1, Surekha Bijapur1, Dr. J Venkata Krishna2
1 Research Scholar, Dept. of Computer Science and Engineering, Srinivas University, Mangaluru, Karnataka, India
2 Associate Professor, Dept. of Computer Science and Engineering, Srinivas University, Mangaluru, Karnataka, India
---------------------------------------------------------------------***---------------------------------------------------------------------
Abstract - Neural networks are inspired by the human brain and enable computers to process data. Neural networks are used in deep learning, which is a subset of machine learning and artificial intelligence. Many neural network models are implemented depending on the target application. This paper provides an overview of deep learning. First, the historical development of deep learning is described. Second, the paper covers the basic concepts of neural networks essential to understanding how they work. Third, it examines a few key models, namely the feed-forward neural network, recurrent neural network, and convolutional neural network, along with their architecture and operation. Finally, it discusses current trends in deep learning.

Keywords: Deep Learning, Neural Networks, Artificial Intelligence, Machine Learning, Feed Forward Neural Network, Convolutional Neural Network, Recurrent Neural Network.

1. INTRODUCTION

Deep learning, a class of machine learning techniques, teaches computers to process data in a way modeled on the human brain. Deep learning uses multilayered neural networks, inspired by the human brain, to solve complex problems. The key difference between deep learning and machine learning is the underlying neural network architecture: machine learning typically uses simple, shallow neural networks, whereas deep learning uses networks with many more layers.

2. HISTORICAL DEVELOPMENT

The evolution of deep learning can be traced back to the early days of artificial intelligence and neural networks. The primary goal of deep learning and artificial neural networks is to make a computer system simulate the human brain. The history of deep learning reaches back to around 300 BC and the theory of Associationism stated by Aristotle [1]. Associationism is a theory that the mind is a set of conceptual elements organized as associations between those elements [2].

The modern evolution of deep learning began in 1943 with the concept of the artificial neuron. Warren McCulloch and Walter Pitts developed a mathematical model based on the working of the basic biological neuron. This artificial neuron is called the MCP neuron, and it laid the foundation for deep learning [2].

In 1949 Donald Hebb, often considered the father of neural networks, introduced the Hebbian learning rule in his book "The Organization of Behavior", which laid the foundation of the modern neural network. This rule explains how neurons adapt and form stronger connections through repeated use [3]. In 1958 Frank Rosenblatt introduced the first perceptron to mimic the human brain's learning process, a model that closely resembles the modern perceptron [4].

Back-propagation, a learning procedure that repeatedly adjusts the weights of the connections in the network to minimize a measure of the difference between the actual output vector of the net and the desired output vector, was proposed by David E. Rumelhart et al. in 1986 [5].

In 1998, LeNet-5, developed by LeCun et al., became one of the first successful applications of convolutional neural networks (CNNs); this gradient-based learning approach applied to handwritten digit recognition was a major step in learning from data [6].

Hinton et al. proposed deep belief networks (DBNs) in 2006, a generative model applied to digit classification. This reignited interest in deep learning by demonstrating the capability of deep networks to learn hierarchical representations [7]. In 2012, Krizhevsky et al.'s AlexNet achieved a significant breakthrough by using a deep convolutional neural network to classify high-resolution images [8], marking the beginning of the deep learning era.

In 2013, Zeiler and Fergus improved on AlexNet with greater accuracy and named the result ZFNet [13]. GoogLeNet [19], also called Inception-v1, was proposed by Szegedy et al. in 2014 to assess quality in the context of object detection and classification. Also in 2014, VGGNet [20], a deep convolutional network, was introduced by Simonyan et al. for large-scale image classification. Later, in 2015, Kaiming He et al. introduced Residual Networks (ResNets) [21] for computer vision applications such as object detection and image segmentation.

3. CONCEPTS OF NEURAL NETWORKS
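The artificial neuron described above can be sketched in a few lines of code: it computes a weighted sum of its inputs and fires only when that sum reaches a threshold, as in the original MCP neuron. This is a minimal illustrative sketch; the weights and threshold below are example values, not taken from the paper.

```python
def mcp_neuron(inputs, weights, threshold):
    """MCP-style artificial neuron: outputs 1 if the weighted sum
    of the inputs reaches the threshold, otherwise 0."""
    weighted_sum = sum(x * w for x, w in zip(inputs, weights))
    return 1 if weighted_sum >= threshold else 0

# Example: with unit weights and a threshold of 2, the neuron
# behaves like a logical AND gate over two binary inputs.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", mcp_neuron([a, b], [1, 1], threshold=2))
```

Learning rules such as Hebb's rule and the perceptron algorithm later made the weights and threshold adjustable from data rather than hand-chosen.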
In this section we discuss the essential concepts and terminology needed to understand deep learning [9]. Neuron: a neuron forms the basic structural unit of a neural network. A neuron is a mathematical model that receives an