Implementing a basic neural network from scratch and comparing it with Scikit-Learn's MLPRegressor.
Study 1: Basic performance comparison
In this study we compare a from-scratch stochastic gradient descent network against a similarly configured Scikit-Learn MLPRegressor. Scikit-Learn achieves better results thanks to optimizations built into its implementation.
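The Scikit-Learn side of such a comparison might be configured as in the sketch below, mirroring the study parameters (four logistic/sigmoid hidden units, plain SGD). The data, learning rate, and iteration budget here are illustrative assumptions, not the study's actual setup.

```python
# Hedged sketch: MLPRegressor configured to mirror the from-scratch network.
# Synthetic data with 3 features; hyperparameters are illustrative only.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = X @ np.array([0.5, -1.0, 2.0])
y = (y - y.mean()) / y.std()          # standardize the target for stable SGD

model = MLPRegressor(hidden_layer_sizes=(4,),   # one hidden layer, 4 neurons
                     activation='logistic',     # sigmoid activation
                     solver='sgd',
                     learning_rate_init=0.05,
                     max_iter=2000,
                     random_state=0)
model.fit(X, y)
mse = mean_squared_error(y, model.predict(X))
print(f"training MSE: {mse:.4f}")
```

Even with the same architecture and solver, MLPRegressor adds refinements (momentum, tolerance-based stopping, careful weight initialization) that a basic hand-rolled version typically lacks.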
Study 2: Dataset size versus performance
As the dataset grows, Scikit-Learn takes longer to train because MLPRegressor keeps iterating until its convergence criterion is satisfied, persistently searching for a better optimum.
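A timing sweep like the following sketch illustrates the effect: training time for MLPRegressor is measured on synthetic datasets of increasing size. The dataset sizes and hyperparameters are assumptions for illustration.

```python
# Hypothetical timing sketch: MLPRegressor training time vs. dataset size.
# Architecture matches the study parameters (4 sigmoid hidden neurons).
import time
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

for n_samples in (100, 1000, 10000):
    X = rng.normal(size=(n_samples, 3))
    y = X @ np.array([0.5, -1.0, 2.0]) + rng.normal(scale=0.1, size=n_samples)

    model = MLPRegressor(hidden_layer_sizes=(4,), activation='logistic',
                         solver='sgd', max_iter=300, random_state=0)
    start = time.perf_counter()
    model.fit(X, y)
    elapsed = time.perf_counter() - start
    print(f"n={n_samples:>6}  time={elapsed:.3f}s  iterations={model.n_iter_}")
```

Because stopping is governed by `tol` and `n_iter_no_change` rather than a fixed epoch count, larger datasets both cost more per epoch and may run for many epochs before the improvement falls below tolerance.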
Study parameters:
- Sigmoid activation function
- One input layer with three features
- One hidden layer with four neurons
- One output layer (regression task)
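Under the parameters above (3 inputs, 4 sigmoid hidden neurons, 1 linear output), a minimal from-scratch version might look like the following sketch. The synthetic data, learning rate, and epoch count are illustrative assumptions, not the study's actual code.

```python
# Minimal from-scratch sketch of the 3-4-1 sigmoid network described above,
# trained with plain per-sample stochastic gradient descent on synthetic data.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(42)

# Synthetic regression data: 3 input features, 1 standardized target.
X = rng.normal(size=(200, 3))
y = (X @ np.array([0.5, -1.0, 2.0])).reshape(-1, 1)
y = (y - y.mean()) / y.std()

# Weights: input(3) -> hidden(4) -> output(1).
W1 = rng.normal(scale=0.5, size=(3, 4)); b1 = np.zeros(4)
W2 = rng.normal(scale=0.5, size=(4, 1)); b2 = np.zeros(1)

lr = 0.05
for epoch in range(300):
    for i in rng.permutation(len(X)):      # one sample at a time (SGD)
        x = X[i:i+1]                       # shape (1, 3)
        h = sigmoid(x @ W1 + b1)           # hidden activations, (1, 4)
        out = h @ W2 + b2                  # linear output, (1, 1)

        err = out - y[i:i+1]               # dLoss/dout for squared error
        dW2 = h.T @ err                    # output-layer gradient
        dh = err @ W2.T * h * (1 - h)      # backprop through the sigmoid
        dW1 = x.T @ dh                     # hidden-layer gradient

        W2 -= lr * dW2; b2 -= lr * err.ravel()
        W1 -= lr * dW1; b1 -= lr * dh.ravel()

mse = np.mean((sigmoid(X @ W1 + b1) @ W2 + b2 - y) ** 2)
print(f"final training MSE: {mse:.4f}")
```

The linear output layer (no activation on the final neuron) is what makes this a regression network; for the Study 1 comparison, the same data can be fed to an MLPRegressor with `hidden_layer_sizes=(4,)` and `activation='logistic'`.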