Neural Network From Scratch


What is a neural network?

Neural networks are one of the main tools used in machine learning. As the name suggests, they are brain-inspired systems intended to replicate the way we humans learn. A neural network consists of an input layer and an output layer, as well as one or more hidden layers of units that transform the input.

Neural Network Diagram

They are excellent tools for finding patterns which are far too complex or numerous for a human programmer to extract and teach the machine to recognize.

Steps for designing a neural network:

  • Define independent and dependent variables.
  • Define hyper-parameters.
  • Define activation function and the derivative.
  • Train the model.
  • Make predictions.
#Import libraries
import numpy as np

Step #1: Variables.

#Variables

# independent variables: seven records, three features each
training_set = np.array([[0,1,0],[0,0,1],[1,0,0],[1,1,0],[1,1,1],[0,1,1],[0,1,0]])

# dependent variables: one label per record
labels = np.array([[1,0,0,1,1,0,1]])
labels = labels.reshape(7,1)  # column vector of shape (7, 1)

The input set contains seven records, each with three features. The labels array holds the corresponding label for each record; these labels are the values that we want our neural network to predict.

Step #2: Hyper-parameters.

Here the random seed ensures we get the same random values on every execution, and lr is the learning rate: the step size at each iteration while moving towards a minimum of the loss function.

#Step 02

# hyper-parameters
np.random.seed(42)              # fixed seed for reproducible results
weights = np.random.rand(3,1)   # one weight per input feature
bias = np.random.rand(1)
lr = 0.05                       # learning rate
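To make the role of lr concrete, here is a minimal sketch of a single gradient-descent step; the weight and gradient values below are made up purely for illustration and are not part of the model above.

# illustration only: one gradient-descent step
w = 0.4          # hypothetical current weight
gradient = 2.0   # hypothetical gradient of the loss with respect to w
step = 0.05      # plays the same role as lr above

w_new = w - step * gradient   # move a small step in the downhill direction
print(w_new)                  # 0.3 (up to floating-point rounding)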

Step #3: Activation function.

The sigmoid function returns 0.5 when the input is 0. It returns a value close to 1 if the input is a large positive number. In the case of negative input, the sigmoid function outputs a value close to zero.

Sigmoid Function
#Step 03

# activation function and its derivative
def sigmoid(x):
    return 1/(1+np.exp(-x))

def sigmoid_derivation(x):
    return sigmoid(x)*(1-sigmoid(x))
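As a quick sanity check on the behaviour described above, we can evaluate the sigmoid at a few points (the printed values are rounded here):

print(sigmoid(0))     # 0.5
print(sigmoid(10))    # ~0.99995, close to 1 for a large positive input
print(sigmoid(-10))   # ~0.000045, close to 0 for a large negative input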

Step #4: Training.

In the context of machine learning, an epoch is one complete pass through the training data. A neural network has to be trained for many epochs before its weights converge to useful values, so the loop below runs for far more than a handful of passes.

#Step 04

#training our model
for epoch in range(10000):     # enough passes for the weights to converge
    inputs = training_set

    # forward pass
    XW = np.dot(inputs, weights) + bias
    z = sigmoid(XW)

    # error for this pass
    error = z - labels
    if epoch % 1000 == 0:
        print(epoch, error.sum())

    # backpropagation: slope of the cost and slope of the activation
    dcost = error
    dpred = sigmoid_derivation(XW)   # derivative evaluated at the pre-activation XW
    z_del = dcost * dpred

    # gradient-descent update of the weights and bias
    weights = weights - lr * np.dot(training_set.T, z_del)
    bias = bias - lr * z_del.sum()
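Once training finishes, a quick check (an addition here, not part of the original walkthrough) is to push the whole training set through the learned weights and compare each output with its label; values close to the labels indicate the network has fit the data.

# predictions for every training record after training
train_predictions = sigmoid(np.dot(training_set, weights) + bias)
print(np.hstack((train_predictions, labels)))   # predicted value next to the true label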
    

Step #5: Outcomes.

In the first case, [1,0,0], the output is close to 0, so the record is classified as 0. In the second case, [0,1,0], the output is close to 1, so it is classified as 1.

#Step 05

# first case: expected class 0
single_pt = np.array([1,0,0])
result = sigmoid(np.dot(single_pt, weights) + bias)
print(result)

# second case: expected class 1
single_pt = np.array([0,1,0])
result = sigmoid(np.dot(single_pt, weights) + bias)
print(result)
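If a hard 0/1 class label is needed rather than a score, a common convention (added here for completeness, not part of the original post) is to threshold the sigmoid output at 0.5.

# convert the sigmoid output into a hard class label
predicted_class = int(result[0] >= 0.5)   # 1 if the score is at least 0.5, else 0
print(predicted_class)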
