diff --git a/Decisiontree.ipynb b/Decisiontree.ipynb
new file mode 100644
index 00000000..d96ca39c
--- /dev/null
+++ b/Decisiontree.ipynb
@@ -0,0 +1,114 @@
+{
+ "nbformat": 4,
+ "nbformat_minor": 0,
+ "metadata": {
+ "colab": {
+ "provenance": [],
+ "authorship_tag": "ABX9TyNhpBaOnO+A7H31urYsJYIl",
+ "include_colab_link": true
+ },
+ "kernelspec": {
+ "name": "python3",
+ "display_name": "Python 3"
+ },
+ "language_info": {
+ "name": "python"
+ }
+ },
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "view-in-github",
+ "colab_type": "text"
+ },
+ "source": [
+ ""
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "source": [
+ "## Decision Tree\n",
+ "\n",
+ "A decision tree is a supervised machine learning algorithm used for both classification and regression tasks. It is a popular and intuitive model with a flowchart-like, tree-shaped structure: it makes predictions by recursively splitting the dataset into subsets based on the most informative features.\n",
+ "\n",
+ "Here's how a decision tree works:\n",
+ "\n",
+ "1. **Root Node**: At the beginning, you have a single node called the \"root node\" that represents the entire dataset.\n",
+ "\n",
+ "2. **Splitting**: The algorithm evaluates candidate attributes (features) and selects the one whose split best separates the data into purer subsets, typically scored with an impurity measure such as Gini impurity or information gain (entropy); a small worked example follows this list. This process is repeated at each internal node of the tree.\n",
+ "\n",
+ "3. **Internal Nodes**: Internal nodes in the tree represent decisions or conditions based on the chosen attribute. For example, if you were building a decision tree to classify animals, an internal node might ask, \"Is it a mammal?\"\n",
+ "\n",
+ "4. **Branches**: Each branch emerging from an internal node represents one of the possible outcomes or values for that attribute. For instance, if the decision is based on whether an animal is a mammal, you would have two branches: \"Yes\" and \"No.\"\n",
+ "\n",
+ "5. **Leaves (Terminal Nodes)**: The terminal nodes, often referred to as \"leaves,\" represent the final classification or prediction. In the animal example, the leaves might specify the animal's species or group.\n",
+ "\n",
+ "6. **Recursive Process**: The process of splitting the data into subsets and creating new internal nodes continues recursively until certain stopping criteria are met. These criteria might include a maximum depth for the tree, a minimum number of samples in a leaf node, or other factors.\n",
+ "\n",
+ "7. **Predictions**: To make a prediction or classification for a new data point, you traverse the tree from the root node down to a leaf node, following the path that corresponds to the values of the attributes for that data point. The class or value associated with the leaf node is the final prediction.\n",
+ "\n",
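+ "A minimal sketch of step 2's notion of the \"best\" split, scored with Gini impurity (the formula is standard; the toy labels below are illustrative):\n",
+ "\n",
+ "```python\n",
+ "from collections import Counter\n",
+ "\n",
+ "def gini(labels):\n",
+ "    # Gini impurity: 1 - sum of squared class proportions.\n",
+ "    n = len(labels)\n",
+ "    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())\n",
+ "\n",
+ "# A candidate split is scored by the weighted impurity of its subsets.\n",
+ "parent = [\"cat\"] * 5 + [\"dog\"] * 5\n",
+ "left, right = [\"cat\"] * 4 + [\"dog\"], [\"cat\"] + [\"dog\"] * 4\n",
+ "weighted = (len(left) * gini(left) + len(right) * gini(right)) / len(parent)\n",
+ "print(round(gini(parent), 2), round(weighted, 2))  # 0.5 -> 0.32: the split reduces impurity\n",
+ "```\n",
+ "\n",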
+ "Decision trees have several advantages, including simplicity and interpretability, as they closely resemble human decision-making. However, they are prone to overfitting when they grow too complex; techniques such as pruning and ensemble methods like Random Forests are common remedies. Decision trees are widely used in fields including finance, healthcare, and natural language processing because of their versatility and ease of use.\n",
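+ "\n",
+ "The code cell below gives a minimal sketch of this workflow: it fits scikit-learn's `DecisionTreeClassifier` on the bundled Iris dataset and prints the learned splits with `export_text`. The dataset and the `max_depth=3` setting are illustrative choices, not part of the algorithm itself."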
+ ],
+ "metadata": {
+ "id": "9FKWmKzajZnG"
+ }
+ },
+ {
+ "cell_type": "code",
+ "source": [
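+ "# A minimal sketch: scikit-learn is assumed to be installed;\n",
+ "# the Iris dataset and max_depth=3 are illustrative choices.\n",
+ "from sklearn.datasets import load_iris\n",
+ "from sklearn.model_selection import train_test_split\n",
+ "from sklearn.tree import DecisionTreeClassifier, export_text\n",
+ "\n",
+ "iris = load_iris()\n",
+ "X_train, X_test, y_train, y_test = train_test_split(\n",
+ "    iris.data, iris.target, random_state=42)\n",
+ "\n",
+ "# Limiting depth is one simple guard against overfitting.\n",
+ "clf = DecisionTreeClassifier(max_depth=3, random_state=42)\n",
+ "clf.fit(X_train, y_train)\n",
+ "\n",
+ "print(f\"Test accuracy: {clf.score(X_test, y_test):.3f}\")\n",
+ "# Print the tree: internal-node conditions and leaf classes.\n",
+ "print(export_text(clf, feature_names=iris.feature_names))"
+ ],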
+ "metadata": {
+ "id": "7HyaE7cyjWok"
+ },
+ "execution_count": null,
+ "outputs": []
+ },
+ {
+ "cell_type": "markdown",
+ "source": [
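+ "If the cell above runs as expected, the `export_text` dump mirrors the structure described earlier: indented `|---` lines are internal-node conditions (for example, a threshold on petal width), and `class:` lines are the leaves that supply the final prediction."
+ ],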
+ "metadata": {
+ "id": "qSksntIZjY8N"
+ }
+ }
+ ]
+}
\ No newline at end of file
diff --git a/README.md b/README.md
index 69ada676..a4290051 100644
--- a/README.md
+++ b/README.md
@@ -67,6 +67,13 @@ Thanks goes to these wonderful people ([:hugs:](https://allcontributors.org/docs
Dhruv Kotwani
+
+