Deep Learning is a subset of Machine Learning. ML algorithms are designed to perform a given task
without being given explicit instructions. A few examples of such algorithms are Linear Regression, Random
Forests, Decision Trees, and Support Vector Machines. Deep Learning, specifically, uses Artificial Neural Networks as the architecture for its algorithms.
The machine learning revolution began back in the 1940s and has since gone by many names. Deep Learning had a
resurgence when Alex Krizhevsky released his monumental algorithm,
AlexNet, on 10 September 2012.
Understanding Artificial Neural Networks
Artificial Neural Networks (ANNs) are also called Connectionist Systems and are loosely inspired by the biological neural networks of the brain.
To build an ANN, a computer program creates virtual neurons, or nodes, and connects them
together. The connections transmit information between the nodes, and each connection is initially
assigned a random value called a weight. Each node performs a simple algebraic operation,
combining the weights with the data received from the previous nodes.
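As a concrete sketch, the "simple algebraic operation" at a node is typically a weighted sum of the incoming data passed through an activation function. The sigmoid used below is an illustrative choice, not the only option:

```python
import math

def node_output(inputs, weights, bias):
    """One node: weighted sum of incoming data, squashed by a sigmoid.

    The sigmoid is an illustrative activation; tanh or ReLU are
    equally common choices in practice.
    """
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))

# A node with three incoming connections:
print(node_output([0.5, -1.0, 2.0], [0.4, 0.3, -0.2], 0.1))
```

The sigmoid keeps every node's output between 0 and 1, which is what lets many such simple operations be stacked into deep networks.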
The Universal Approximation Theorem states that, by adjusting these weights, an ANN
can learn the relationship between input and output, provided such a relationship exists.
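A minimal illustration of that idea: the toy network below (one hidden layer of tanh nodes, trained with plain stochastic gradient descent; the target function, layer size, learning rate, and epoch count are all illustrative choices) adjusts its weights until it has learned the relationship y = x² from example pairs:

```python
import math
import random

def train(epochs=2000, lr=0.05, hidden=8, seed=0):
    """Fit a one-hidden-layer tanh network to y = x**2 with SGD.

    Returns (initial_mse, final_mse). All hyperparameters here are
    illustrative, not a prescribed recipe.
    """
    rng = random.Random(seed)
    w = [rng.uniform(-1, 1) for _ in range(hidden)]  # input -> hidden weights
    b = [rng.uniform(-1, 1) for _ in range(hidden)]  # hidden biases
    v = [rng.uniform(-1, 1) for _ in range(hidden)]  # hidden -> output weights
    c = 0.0                                          # output bias

    xs = [i / 10 - 1 for i in range(21)]             # 21 points in [-1, 1]
    ys = [x * x for x in xs]                         # the target relationship

    def predict(x):
        return sum(v[j] * math.tanh(w[j] * x + b[j]) for j in range(hidden)) + c

    def mse():
        return sum((predict(x) - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

    initial = mse()
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            h = [math.tanh(w[j] * x + b[j]) for j in range(hidden)]
            err = sum(v[j] * h[j] for j in range(hidden)) + c - y
            c -= lr * err
            for j in range(hidden):
                grad_h = err * v[j] * (1 - h[j] ** 2)  # backprop through tanh
                v[j] -= lr * err * h[j]
                w[j] -= lr * grad_h * x
                b[j] -= lr * grad_h
    return initial, mse()

before, after = train()
print(f"MSE before training: {before:.4f}, after: {after:.4f}")
```

The network is never told the formula y = x²; it only sees input-output pairs, and the falling error shows the weights converging toward a function that reproduces the relationship.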
What is the Input & Output for ANNs?
This depends on the task to be performed. AlexNet was designed to recognize objects in
images (Computer Vision), so its input was an image and its output was the name of the object it contained.
The AlexNet ANN was able to approximate the functions required to identify the visual patterns of different
objects.
At Yobee Research, the
objective is to decode the complexities of stock markets. For this application, the input is financial
data such as GDP, currency exchange rates, and stock prices, and the output is the movement of the market.
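Purely as a hypothetical sketch (the feature names, weights, and linear scoring rule below are invented for illustration and are not Yobee Research's actual model), the shape of such an input-to-output mapping might look like:

```python
def market_direction(weights, bias, features):
    """Map financial features to a market-movement label.

    Hypothetical example: a linear score over named features, with
    'up' predicted when the score is positive. A real model would
    replace this scoring rule with a trained neural network.
    """
    score = sum(weights[k] * features[k] for k in features) + bias
    return "up" if score > 0 else "down"

# Invented illustrative inputs: GDP growth, USD/EUR rate, an index return.
features = {"gdp_growth": 0.02, "usd_eur": 1.08, "index_return": -0.01}
weights = {"gdp_growth": 5.0, "usd_eur": 0.1, "index_return": 10.0}
print(market_direction(weights, -0.1, features))
```

The point is the interface, not the rule: financial measurements go in as a feature vector, and a market-movement prediction comes out.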