This is part of DeepLearning.AI's Deep Learning specialization. Use the document to review the procedures for setting up features, weights, and biases for forward propagation, backward propagation, gradient descent, and parameter updates. It uses ReLU activations for the first L-1 layers and a sigmoid activation for the output layer, and assumes a single output neuron.
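The architecture described above (ReLU hidden layers, one sigmoid output neuron) might be sketched as follows. This is a minimal illustration, not the repository's code: the function names, He-style initialization, and shape conventions (features as rows, examples as columns) are assumptions.

```python
import numpy as np

def relu(Z):
    return np.maximum(0, Z)

def sigmoid(Z):
    return 1 / (1 + np.exp(-Z))

def init_params(layer_dims, seed=0):
    """He-style initialization; layer_dims = [n_x, n_h1, ..., 1]."""
    rng = np.random.default_rng(seed)
    params = {}
    for l in range(1, len(layer_dims)):
        params[f"W{l}"] = rng.standard_normal(
            (layer_dims[l], layer_dims[l - 1])
        ) * np.sqrt(2 / layer_dims[l - 1])
        params[f"b{l}"] = np.zeros((layer_dims[l], 1))
    return params

def forward(X, params):
    """Propagate X of shape (n_x, m) through L layers; the output AL lies in (0, 1)."""
    L = len(params) // 2  # two parameter arrays (W, b) per layer
    A = X
    for l in range(1, L):
        A = relu(params[f"W{l}"] @ A + params[f"b{l}"])   # hidden layers: ReLU
    AL = sigmoid(params[f"W{L}"] @ A + params[f"b{L}"])   # output layer: sigmoid
    return AL

params = init_params([3, 4, 1])                # 3 features, 4 hidden units, 1 output
X = np.random.default_rng(1).standard_normal((3, 5))  # 5 examples
AL = forward(X, params)
print(AL.shape)  # (1, 5)
```

Because the output layer is a sigmoid, every entry of `AL` is a probability in (0, 1), suitable for binary classification with one output neuron.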
This document develops the first principles required to build an L-layer neural network, building functions that perform forward propagation, backward propagation, gradient descent, and parameter updates.
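The final step in that list, the parameter update, is the simplest to sketch: every weight matrix and bias vector moves a small step against its gradient. The key naming convention (`W1`/`dW1`, etc.) is an assumption borrowed from the specialization's usual style, not taken from this repository's code.

```python
import numpy as np

def update_parameters(params, grads, learning_rate=0.01):
    """Gradient-descent step: theta := theta - alpha * d(theta) for every W and b."""
    L = len(params) // 2  # number of layers with parameters
    for l in range(1, L + 1):
        params[f"W{l}"] = params[f"W{l}"] - learning_rate * grads[f"dW{l}"]
        params[f"b{l}"] = params[f"b{l}"] - learning_rate * grads[f"db{l}"]
    return params

# Toy example: one layer, all-ones gradients.
params = {"W1": np.ones((2, 2)), "b1": np.zeros((2, 1))}
grads = {"dW1": np.ones((2, 2)), "db1": np.ones((2, 1))}
params = update_parameters(params, grads, learning_rate=0.1)
print(params["W1"][0, 0])  # 0.9
```

In the full pipeline this step runs once per iteration, after backward propagation has filled `grads` with `dW` and `db` for every layer.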
escalante-cr/Building_L_deep_NN_from_scratch