diff --git a/completed_tutorials/02-Calcium Imaging Imported Tables.ipynb b/completed_tutorials/02-Calcium Imaging Imported Tables.ipynb
index 5b9734d..18b685f 100644
--- a/completed_tutorials/02-Calcium Imaging Imported Tables.ipynb
+++ b/completed_tutorials/02-Calcium Imaging Imported Tables.ipynb
@@ -732,7 +732,7 @@
    "source": [
     "In DataJoint, the tier of the table indicates **the nature of the data and the data source for the table**. So far we have encountered two table tiers: `Manual` and `Imported`, and we will encounter the two other major tiers in this session. \n",
     "\n",
-    "DataJoint tables in `Manual` tier, or simply **Manual tables** indicate that its contents are **manually** entered by either experimenters or a recording system, and its content **do not depend on external data files or other tables**. This is the most basic table type you will encounter, especially as the tables at the beggining of the pipeline. In the Diagram, `Manual` tables are depicted by green rectangles.\n",
+    "DataJoint tables in `Manual` tier, or simply **Manual tables** indicate that its contents are **manually** entered by either experimenters or a recording system, and its content **do not depend on external data files or other tables**. This is the most basic table type you will encounter, especially as the tables at the beginning of the pipeline. In the Diagram, `Manual` tables are depicted by green rectangles.\n",
     "\n",
     "On the other hand, **Imported tables** are understood to pull data (or *import* data) from external data files, and come equipped with functionalities to perform this importing process automatically, as we will see shortly! In the Diagram, `Imported` tables are depicted by blue ellipses."
    ]
diff --git a/completed_tutorials/03-Calcium Imaging Computed Tables.ipynb b/completed_tutorials/03-Calcium Imaging Computed Tables.ipynb
index 4436773..26a181a 100644
--- a/completed_tutorials/03-Calcium Imaging Computed Tables.ipynb
+++ b/completed_tutorials/03-Calcium Imaging Computed Tables.ipynb
@@ -770,7 +770,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "There are multiple ways to perform the segementation. To keep it simple, we just detect the cells by setting up the threshold on the average image."
+    "There are multiple ways to perform the segmentation. To keep it simple, we just detect the cells by setting up the threshold on the average image."
    ]
   },
   {
@@ -1027,7 +1027,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "We would like to perform the segmentation for a **combination** of `AverageFrame`s and different set of paremeters of `threshold` and `size_cutoff` values. To do this while still taking advantage of the `make` and `populate` logic, you would want to define a table to house parameters for segmentation in a `Lookup` table!"
+    "We would like to perform the segmentation for a **combination** of `AverageFrame`s and different set of parameters of `threshold` and `size_cutoff` values. To do this while still taking advantage of the `make` and `populate` logic, you would want to define a table to house parameters for segmentation in a `Lookup` table!"
    ]
   },
   {
@@ -1160,7 +1160,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "The `Computed` table is labeled as a pink oval and the `Part` table is bare text. We see that `Segmentation` is a `Computed` table that depends on **both AverageFrame and SegmentationParam**. Finally, let's go ahead and implement the `make` method for the `Segmenation` table. "
+    "The `Computed` table is labeled as a pink oval and the `Part` table is bare text. We see that `Segmentation` is a `Computed` table that depends on **both AverageFrame and SegmentationParam**. Finally, let's go ahead and implement the `make` method for the `Segmentation` table. "
    ]
   },
   {
@@ -1342,7 +1342,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "And for the part table `Segmenation.Roi`, there was an additional primary key attribute `roi_idx`:`"
+    "And for the part table `Segmentation.Roi`, there was an additional primary key attribute `roi_idx`:`"
    ]
   },
   {
@@ -1721,7 +1721,7 @@
     }
    ],
    "source": [
-    "# ENTER YOUR CODE! - populate the Segmenation table for real!\n",
+    "# ENTER YOUR CODE! - populate the Segmentation table for real!\n",
     "Segmentation.populate()"
    ]
   },
@@ -2177,7 +2177,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "We can simply delete the unwanted paramter from the `SegmentationParam` table, and let DataJoint cascade the deletion:"
+    "We can simply delete the unwanted parameter from the `SegmentationParam` table, and let DataJoint cascade the deletion:"
    ]
   },
   {
diff --git a/completed_tutorials/04-Electrophysiology Imported Tables.ipynb b/completed_tutorials/04-Electrophysiology Imported Tables.ipynb
index daf90f3..a52528b 100644
--- a/completed_tutorials/04-Electrophysiology Imported Tables.ipynb
+++ b/completed_tutorials/04-Electrophysiology Imported Tables.ipynb
@@ -523,7 +523,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "Let's take the first key, and generate the file name that corresponds to this session. Remember the `data_{mouse_id}_{session_date}.npy` filename convetion!"
+    "Let's take the first key, and generate the file name that corresponds to this session. Remember the `data_{mouse_id}_{session_date}.npy` filename convention!"
    ]
   },
   {
@@ -977,7 +977,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "So this particular file contains a NumPy array of size 1 x 1000. This represents a (simulated) recording of raw electric activity from neuron(s) (1st dimension) over 1000 time bins (2nd dimesion)."
+    "So this particular file contains a NumPy array of size 1 x 1000. This represents a (simulated) recording of raw electric activity from neuron(s) (1st dimension) over 1000 time bins (2nd dimension)."
    ]
   },
   {
@@ -1068,7 +1068,7 @@
    "source": [
     "In DataJoint, the tier of the table indicates **the nature of the data and the data source for the table**. So far we have encountered two table tiers: `Manual` and `Imported`, and we will encounter the two other major tiers in this session. \n",
     "\n",
-    "DataJoint tables in `Manual` tier, or simply **Manual tables** indicate that its contents are **manually** entered by either experimenters or a recording system, and its content **do not depend on external data files or other tables**. This is the most basic table type you will encounter, especially as the tables at the beggining of the pipeline. In the Diagram, `Manual` tables are depicted by green rectangles.\n",
+    "DataJoint tables in `Manual` tier, or simply **Manual tables** indicate that its contents are **manually** entered by either experimenters or a recording system, and its content **do not depend on external data files or other tables**. This is the most basic table type you will encounter, especially as the tables at the beginning of the pipeline. In the Diagram, `Manual` tables are depicted by green rectangles.\n",
     "\n",
     "On the other hand, **Imported tables** are understood to pull data (or *import* data) from external data files, and come equipped with functionalities to perform this importing process automatically, as we will see shortly! In the Diagram, `Imported` tables are depicted by blue ellipses."
    ]
diff --git a/completed_tutorials/05-Electrophysiology Computed Tables.ipynb b/completed_tutorials/05-Electrophysiology Computed Tables.ipynb
index 824043d..7fa91ef 100644
--- a/completed_tutorials/05-Electrophysiology Computed Tables.ipynb
+++ b/completed_tutorials/05-Electrophysiology Computed Tables.ipynb
@@ -3333,7 +3333,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "We can simply delete the unwanted paramter from the `SpikeDetectionParam` table, and let DataJoint cascade the deletion:"
+    "We can simply delete the unwanted parameter from the `SpikeDetectionParam` table, and let DataJoint cascade the deletion:"
    ]
   },
   {
diff --git a/short_tutorials/University.ipynb b/short_tutorials/University.ipynb
index 7518e79..5d2fcf2 100644
--- a/short_tutorials/University.ipynb
+++ b/short_tutorials/University.ipynb
@@ -490,7 +490,7 @@
    "metadata": {},
    "outputs": [],
    "source": [
-    "# Millenials\n",
+    "# Millennials\n",
     "millennials = Student & 'date_of_birth between \"1981-01-01\" and \"1996-12-31\"'"
    ]
   },
@@ -519,7 +519,7 @@
    "metadata": {},
    "outputs": [],
    "source": [
-    "# Millenials who have never enrolled\n",
+    "# Millennials who have never enrolled\n",
     "millennials - Enroll"
    ]
   },
diff --git a/tutorials/01-DataJoint Basics.ipynb b/tutorials/01-DataJoint Basics.ipynb
index 2ef34fb..aabadac 100644
--- a/tutorials/01-DataJoint Basics.ipynb
+++ b/tutorials/01-DataJoint Basics.ipynb
@@ -51,7 +51,7 @@
     "If you visit the [documentation for DataJoint](https://docs.datajoint.io/introduction/Data-pipelines.html), we define a data pipeline as follows:\n",
     "> A data pipeline is a sequence of steps (more generally a directed acyclic graph) with integrated storage at each step. These steps may be thought of as nodes in a graph.\n",
     "\n",
-    "While this is an accurate description, it may not be the most intuitive definition. Put succinctly, a data pipeline is a listing or a \"map\" of various \"things\" that you work with in a project, with line connecting things to each other to indicate their dependecies. The \"things\" in a data pipeline tends to be the *nouns* you find when describing a project. The \"things\" may include anything from mouse, experimenter, equipment, to experiment session, trial, two-photon scans, electric activities, to receptive fields, neuronal spikes, to figures for a publication! A data pipeline gives you a framework to:\n",
+    "While this is an accurate description, it may not be the most intuitive definition. Put succinctly, a data pipeline is a listing or a \"map\" of various \"things\" that you work with in a project, with line connecting things to each other to indicate their dependencies. The \"things\" in a data pipeline tends to be the *nouns* you find when describing a project. The \"things\" may include anything from mouse, experimenter, equipment, to experiment session, trial, two-photon scans, electric activities, to receptive fields, neuronal spikes, to figures for a publication! A data pipeline gives you a framework to:\n",
     "\n",
     "1. define these \"things\" as tables in which you can store the information about them\n",
     "2. define the relationships (in particular the dependencies) between the \"things\"\n",
diff --git a/tutorials/02-Calcium Imaging Imported Tables.ipynb b/tutorials/02-Calcium Imaging Imported Tables.ipynb
index 6114d4b..279da47 100644
--- a/tutorials/02-Calcium Imaging Imported Tables.ipynb
+++ b/tutorials/02-Calcium Imaging Imported Tables.ipynb
@@ -342,7 +342,7 @@
    "source": [
     "In DataJoint, the tier of the table indicates **the nature of the data and the data source for the table**. So far we have encountered two table tiers: `Manual` and `Imported`, and we will encounter the two other major tiers in this session. \n",
     "\n",
-    "DataJoint tables in `Manual` tier, or simply **Manual tables** indicate that its contents are **manually** entered by either experimenters or a recording system, and its content **do not depend on external data files or other tables**. This is the most basic table type you will encounter, especially as the tables at the beggining of the pipeline. In the Diagram, `Manual` tables are depicted by green rectangles.\n",
+    "DataJoint tables in `Manual` tier, or simply **Manual tables** indicate that its contents are **manually** entered by either experimenters or a recording system, and its content **do not depend on external data files or other tables**. This is the most basic table type you will encounter, especially as the tables at the beginning of the pipeline. In the Diagram, `Manual` tables are depicted by green rectangles.\n",
     "\n",
     "On the other hand, **Imported tables** are understood to pull data (or *import* data) from external data files, and come equipped with functionalities to perform this importing process automatically, as we will see shortly! In the Diagram, `Imported` tables are depicted by blue ellipses."
    ]
diff --git a/tutorials/03-Calcium Imaging Computed Tables.ipynb b/tutorials/03-Calcium Imaging Computed Tables.ipynb
index 9427bd1..f90f78a 100644
--- a/tutorials/03-Calcium Imaging Computed Tables.ipynb
+++ b/tutorials/03-Calcium Imaging Computed Tables.ipynb
@@ -250,7 +250,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "There are multiple ways to perform the segementation. To keep it simple, we just detect the cells by setting up the threshold on the average image."
+    "There are multiple ways to perform the segmentation. To keep it simple, we just detect the cells by setting up the threshold on the average image."
    ]
   },
   {
@@ -397,7 +397,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "We would like to perform the segmentation for a **combination** of `AverageFrame`s and different set of paremeters of `threshold` and `size_cutoff` values. To do this while still taking advantage of the `make` and `populate` logic, you would want to define a table to house parameters for segmentation in a `Lookup` table!"
+    "We would like to perform the segmentation for a **combination** of `AverageFrame`s and different set of parameters of `threshold` and `size_cutoff` values. To do this while still taking advantage of the `make` and `populate` logic, you would want to define a table to house parameters for segmentation in a `Lookup` table!"
    ]
   },
   {
@@ -506,7 +506,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "The `Computed` table is labeled as a pink oval and the `Part` table is bare text. We see that `Segmentation` is a `Computed` table that depends on **both AverageFrame and SegmentationParam**. Finally, let's go ahead and implement the `make` method for the `Segmenation` table. "
+    "The `Computed` table is labeled as a pink oval and the `Part` table is bare text. We see that `Segmentation` is a `Computed` table that depends on **both AverageFrame and SegmentationParam**. Finally, let's go ahead and implement the `make` method for the `Segmentation` table. "
    ]
   },
   {
@@ -597,7 +597,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "And for the part table `Segmenation.Roi`, there was an additional primary key attribute `roi_idx`:`"
+    "And for the part table `Segmentation.Roi`, there was an additional primary key attribute `roi_idx`:`"
    ]
   },
   {
@@ -693,7 +693,7 @@
    "metadata": {},
    "outputs": [],
    "source": [
-    "# ENTER YOUR CODE! - populate the Segmenation table for real!\n"
+    "# ENTER YOUR CODE! - populate the Segmentation table for real!\n"
    ]
   },
   {
@@ -804,7 +804,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "We can simply delete the unwanted paramter from the `SegmentationParam` table, and let DataJoint cascade the deletion:"
+    "We can simply delete the unwanted parameter from the `SegmentationParam` table, and let DataJoint cascade the deletion:"
    ]
   },
   {
diff --git a/tutorials/04-Electrophysiology Imported Tables.ipynb b/tutorials/04-Electrophysiology Imported Tables.ipynb
index a32834e..41823b7 100644
--- a/tutorials/04-Electrophysiology Imported Tables.ipynb
+++ b/tutorials/04-Electrophysiology Imported Tables.ipynb
@@ -190,7 +190,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "Let's take the first key, and generate the file name that corresponds to this session. Remember the `data_{mouse_id}_{session_date}.npy` filename convetion!"
+    "Let's take the first key, and generate the file name that corresponds to this session. Remember the `data_{mouse_id}_{session_date}.npy` filename convention!"
    ]
   },
   {
@@ -267,7 +267,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "So this particular file contains a NumPy array of size 1 x 1000. This represents a (simulated) recording of raw electric activity from neuron(s) (1st dimension) over 1000 time bins (2nd dimesion)."
+    "So this particular file contains a NumPy array of size 1 x 1000. This represents a (simulated) recording of raw electric activity from neuron(s) (1st dimension) over 1000 time bins (2nd dimension)."
    ]
   },
   {
@@ -345,7 +345,7 @@
    "source": [
     "In DataJoint, the tier of the table indicates **the nature of the data and the data source for the table**. So far we have encountered two table tiers: `Manual` and `Imported`, and we will encounter the two other major tiers in this session. \n",
     "\n",
-    "DataJoint tables in `Manual` tier, or simply **Manual tables** indicate that its contents are **manually** entered by either experimenters or a recording system, and its content **do not depend on external data files or other tables**. This is the most basic table type you will encounter, especially as the tables at the beggining of the pipeline. In the Diagram, `Manual` tables are depicted by green rectangles.\n",
+    "DataJoint tables in `Manual` tier, or simply **Manual tables** indicate that its contents are **manually** entered by either experimenters or a recording system, and its content **do not depend on external data files or other tables**. This is the most basic table type you will encounter, especially as the tables at the beginning of the pipeline. In the Diagram, `Manual` tables are depicted by green rectangles.\n",
     "\n",
     "On the other hand, **Imported tables** are understood to pull data (or *import* data) from external data files, and come equipped with functionalities to perform this importing process automatically, as we will see shortly! In the Diagram, `Imported` tables are depicted by blue ellipses."
    ]
diff --git a/tutorials/05-Electrophysiology Computed Tables.ipynb b/tutorials/05-Electrophysiology Computed Tables.ipynb
index b462cf6..06d9fd9 100644
--- a/tutorials/05-Electrophysiology Computed Tables.ipynb
+++ b/tutorials/05-Electrophysiology Computed Tables.ipynb
@@ -1030,7 +1030,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "We can simply delete the unwanted paramter from the `SpikeDetectionParam` table, and let DataJoint cascade the deletion:"
+    "We can simply delete the unwanted parameter from the `SpikeDetectionParam` table, and let DataJoint cascade the deletion:"
    ]
   },
   {