This project teaches a virtual car to drive autonomously by mimicking human driving behavior. A deep neural network analyzes driving data collected from human drivers and learns to predict driving actions such as steering (for now, acceleration is not controlled by the model). The project leverages behavioral cloning, a form of supervised learning, to let machines learn complex behaviors directly from data.
- Python 3.8+
- PyTorch (torch and torchvision)
- OpenCV
- Udacity Car Simulator
Critical dependencies: the following two packages handle communication with the simulator, and these exact versions are required.
- [python-socketio](https://pypi.org/project/python-socketio/) v4.6.1
- [python-engineio](https://pypi.org/project/python-engineio/) v3.13
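The two pins above can be recorded in a pip requirements file so they are installed exactly:

```
python-socketio==4.6.1
python-engineio==3.13
```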
# Clone the repository
git clone
# Install the dependencies
pip install -r requirements.txt
# Start the simulator separately
# After the simulator is started, run the following
python drive_pt.py
See the Acknowledgements section.
Image preprocessing happens in two rounds. The first round is applied only to the training data: random transformations (affine warp, horizontal flip, translation, rotation, brightness adjustment) augment the training set.
The second round is applied to all data: the image is converted from the RGB color space to YUV, smoothed with a Gaussian blur, and resized.
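One detail worth noting about the flip augmentation: mirroring the camera image also mirrors the road, so the steering label must be negated to stay consistent. A minimal NumPy sketch (function and variable names here are illustrative, not taken from this repo):

```python
import numpy as np

def flip_horizontal(image, steering_angle):
    """Mirror the image left-right and negate the steering label to match."""
    return np.fliplr(image), -steering_angle

# During training this would be applied with some probability (e.g. 0.5)
# alongside the other transformations listed above.
img = np.zeros((66, 200, 3), dtype=np.uint8)
img[:, :100] = 255  # bright left half
flipped, angle = flip_horizontal(img, 0.3)
# the bright half is now on the right, and the steering sign is reversed
```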
The NVIDIA DAVE-2 CNN model architecture was used. Read more about it here.
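For reference, a PyTorch sketch of the DAVE-2 layer stack, assuming the 66×200 YUV input from the NVIDIA paper. The conv/FC sizes follow the paper; the ELU activations and the exact module layout here are assumptions about this repo's implementation:

```python
import torch
import torch.nn as nn

class Dave2(nn.Module):
    """DAVE-2: five convolutional layers followed by fully connected layers."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 24, 5, stride=2), nn.ELU(),
            nn.Conv2d(24, 36, 5, stride=2), nn.ELU(),
            nn.Conv2d(36, 48, 5, stride=2), nn.ELU(),
            nn.Conv2d(48, 64, 3), nn.ELU(),
            nn.Conv2d(64, 64, 3), nn.ELU(),
        )
        self.regressor = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 1 * 18, 100), nn.ELU(),  # 1152 features after the convs
            nn.Linear(100, 50), nn.ELU(),
            nn.Linear(50, 10), nn.ELU(),
            nn.Linear(10, 1),  # single steering-angle output
        )

    def forward(self, x):
        return self.regressor(self.features(x))

model = Dave2()
steering = model(torch.zeros(1, 3, 66, 200))  # one dummy 66x200 YUV frame
```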
I got the training data from this repository.