Commit: typo fixes
rasbt committed Feb 1, 2016
1 parent a5f2961 commit 974e0ba
Showing 7 changed files with 19 additions and 113 deletions.
3 changes: 2 additions & 1 deletion docs/sources/CONTRIBUTING.md
@@ -314,7 +314,8 @@ If you are adding a new document, please also include it in the pages section in
First, please check the documentation via localhost (http://127.0.0.1:8000/):

```bash
-~/github/mlxtend/docs$ mkdocs serve```
+~/github/mlxtend/docs$ mkdocs serve
+```

Next, build the static HTML files of the mlxtend documentation via
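The build command itself falls below the fold of this hunk. As a hedged sketch (the actual command and flags are not shown in this diff), the standard MkDocs build step would be:

```bash
# Hedged sketch: standard MkDocs build step; the real command is truncated in this hunk.
~/github/mlxtend/docs$ mkdocs build --clean
```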

@@ -87,28 +87,6 @@ Sequential Feature Selection for Classification and Regression.
'cv_scores' (list of individual cross-validation scores)
'avg_score' (average cross-validation score)

**Examples**

>>> from sklearn.neighbors import KNeighborsClassifier
>>> from sklearn.datasets import load_iris
>>> iris = load_iris()
>>> X = iris.data
>>> y = iris.target
>>> knn = KNeighborsClassifier(n_neighbors=4)
>>> sfs = SequentialFeatureSelector(knn, k_features=2,
... scoring='accuracy', cv=5)
>>> sfs = sfs.fit(X, y)
>>> sfs.indices_
(2, 3)
>>> sfs.transform(X[:5])
array([[ 1.4, 0.2],
[ 1.4, 0.2],
[ 1.3, 0.2],
[ 1.5, 0.2],
[ 1.4, 0.2]])
>>> print('best score: %.2f' % sfs.k_score_)
best score: 0.97

### Methods

<hr>
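For context, a minimal sketch of fitting the selector and inspecting the attributes documented above, based on the doctest removed in this hunk. The `subsets_` layout (a dict keyed by subset size holding 'cv_scores' and 'avg_score') is inferred from the truncated docstring and is an assumption for this mlxtend version:

```python
# Hedged sketch, based on the removed doctest above; the subsets_ layout
# is an assumption inferred from the truncated docstring.
from sklearn.datasets import load_iris
from sklearn.neighbors import KNeighborsClassifier
from mlxtend.feature_selection import SequentialFeatureSelector

iris = load_iris()
X, y = iris.data, iris.target

knn = KNeighborsClassifier(n_neighbors=4)
sfs = SequentialFeatureSelector(knn, k_features=2, scoring='accuracy', cv=5)
sfs = sfs.fit(X, y)

print(sfs.indices_)  # selected feature indices, e.g. (2, 3) per the removed doctest
print(sfs.k_score_)  # cross-validation score of the selected subset, e.g. 0.97

# Assumption: subsets_ maps subset size -> dict with the keys documented above.
for size, subset in sfs.subsets_.items():
    print(size, subset['avg_score'], subset['cv_scores'])
```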
@@ -1,6 +1,6 @@
## shuffle_arrays_unison

-*shuffle_arrays_unison(arrays, random_state=None)*
+*shuffle_arrays_unison(arrays, random_seed=None)*

Shuffle NumPy arrays in unison.

@@ -10,9 +10,9 @@ Shuffle NumPy arrays in unison.

A list of NumPy arrays.

-- `random_state` : int (default: None)
+- `random_seed` : int (default: None)

-Sets the random seed.
+Sets the random state.

**Returns**

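Since this hunk renames the parameter, a quick usage sketch may help. That the function returns the shuffled arrays is an assumption here, as the Returns section is truncated above:

```python
# Hedged sketch of shuffle_arrays_unison with the renamed random_seed
# parameter; assumes the shuffled arrays are returned (the Returns
# section is truncated in the hunk above).
import numpy as np
from mlxtend.preprocessing import shuffle_arrays_unison

X = np.array([[1, 2], [3, 4], [5, 6]])
y = np.array([1, 2, 3])

# The same permutation is applied to X and y, so rows and labels stay aligned.
X_shuffled, y_shuffled = shuffle_arrays_unison(arrays=[X, y], random_seed=3)
print(X_shuffled)
print(y_shuffled)
```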
22 changes: 0 additions & 22 deletions docs/sources/api_subpackages/mlxtend.feature_selection.md
@@ -156,28 +156,6 @@ Sequential Feature Selection for Classification and Regression.
'cv_scores' (list of individual cross-validation scores)
'avg_score' (average cross-validation score)

**Examples**

>>> from sklearn.neighbors import KNeighborsClassifier
>>> from sklearn.datasets import load_iris
>>> iris = load_iris()
>>> X = iris.data
>>> y = iris.target
>>> knn = KNeighborsClassifier(n_neighbors=4)
>>> sfs = SequentialFeatureSelector(knn, k_features=2,
... scoring='accuracy', cv=5)
>>> sfs = sfs.fit(X, y)
>>> sfs.indices_
(2, 3)
>>> sfs.transform(X[:5])
array([[ 1.4, 0.2],
[ 1.4, 0.2],
[ 1.3, 0.2],
[ 1.5, 0.2],
[ 1.4, 0.2]])
>>> print('best score: %.2f' % sfs.k_score_)
best score: 0.97

### Methods

<hr>
6 changes: 3 additions & 3 deletions docs/sources/api_subpackages/mlxtend.preprocessing.md
@@ -133,7 +133,7 @@ Min max scaling of pandas' DataFrames.

## shuffle_arrays_unison

-*shuffle_arrays_unison(arrays, random_state=None)*
+*shuffle_arrays_unison(arrays, random_seed=None)*

Shuffle NumPy arrays in unison.

@@ -143,9 +143,9 @@ Shuffle NumPy arrays in unison.

A list of NumPy arrays.

-- `random_state` : int (default: None)
+- `random_seed` : int (default: None)

-Sets the random seed.
+Sets the random state.

**Returns**

@@ -64,7 +64,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"Implementation of of *sequential feature algorithms* (SFAs) -- greedy search algorithm -- that have been developed as a suboptimal solution to the computationally often not feasible exhaustive search."
"Implementation of *sequential feature algorithms* (SFAs) -- greedy search algorithms -- that have been developed as a suboptimal solution to the computationally often not feasible exhaustive search."
]
},
{
@@ -85,7 +85,8 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"In a nutshell, SFAs remove or add one feature at the time based on the classifier performance until a feature subset of the desired size k is reached. There are 4 different flavors of SFAs available via the `Sequential Feature Selector`:\n",
"In a nutshell, SFAs remove or add one feature at the time based on the classifier performance until a feature subset of the desired size k is reached. There are 4 different flavors of SFAs available via the `SequentialFeatureSelector`:\n",
"\n",
"1. Sequential Forward Selection (SFS)\n",
"2. Sequential Backward Selection (SBS)\n",
"3. Sequential Floating Forward Selection (SFFS)\n",
@@ -96,8 +97,8 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"The ***floating*** variants, SFFS and SFBS can be considered as extensions of the simpler SFS and SBS algorithms. the floating algorithms have an additional exclusion or inclusion step to remove features once they were included (or excluded), so that a larger number of feature subset combinations can be sampled. It is important to emphasize that this step is conditional only occurs if the resulting feature subset is assessed as \"better\" by the criterion function after removal (or addition) of a particular feature. Furthermore, I added an optional check to skip the conditional exclusion steps if the algorithm gets stuck in cycles. \n",
"The algorithms are outlined in pseudocode below:"
"The ***floating*** variants, SFFS and SFBS can be considered as extensions to the simpler SFS and SBS algorithms. The floating algorithms have an additional exclusion or inclusion step to remove features once they were included (or excluded), so that a larger number of feature subset combinations can be sampled. It is important to emphasize that this step is conditional and only occurs if the resulting feature subset is assessed as \"better\" by the criterion function after removal (or addition) of a particular feature. Furthermore, I added an optional check to skip the conditional exclusion steps if the algorithm gets stuck in cycles. \n",
"The algorithms are outlined in pseudo code below:"
]
},
{
@@ -957,7 +958,7 @@
},
{
"cell_type": "code",
"execution_count": 57,
"execution_count": 2,
"metadata": {
"collapsed": false
},
@@ -1055,28 +1056,6 @@
" 'cv_scores' (list individual cross-validation scores)\n",
" 'avg_score' (average cross-validation score)\n",
"\n",
"**Examples**\n",
"\n",
">>> from sklearn.neighbors import KNeighborsClassifier\n",
" >>> from sklearn.datasets import load_iris\n",
" >>> iris = load_iris()\n",
" >>> X = iris.data\n",
" >>> y = iris.target\n",
" >>> knn = KNeighborsClassifier(n_neighbors=4)\n",
" >>> sfs = SequentialFeatureSelector(knn, k_features=2,\n",
" ... scoring='accuracy', cv=5)\n",
" >>> sfs = sfs.fit(X, y)\n",
" >>> sfs.indices_\n",
" (2, 3)\n",
" >>> sfs.transform(X[:5])\n",
" array([[ 1.4, 0.2],\n",
" [ 1.4, 0.2],\n",
" [ 1.3, 0.2],\n",
" [ 1.5, 0.2],\n",
" [ 1.4, 0.2]])\n",
" >>> print('best score: %.2f' % sfs.k_score_)\n",
" best score: 0.97\n",
"\n",
"### Methods\n",
"\n",
"<hr>\n",
@@ -1195,15 +1174,6 @@
" s += ''.join(s2[1:])\n",
"print(s)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": []
}
],
"metadata": {
@@ -1,19 +1,20 @@
# Sequential Feature Selector

-Implementation of of *sequential feature algorithms* (SFAs) -- greedy search algorithm -- that have been developed as a suboptimal solution to the computationally often not feasible exhaustive search.
+Implementation of *sequential feature algorithms* (SFAs) -- greedy search algorithms -- that have been developed as a suboptimal solution to the computationally often not feasible exhaustive search.

> from mlxtend.feature_selection import SequentialFeatureSelector
+# Overview

-In a nutshell, SFAs remove or add one feature at the time based on the classifier performance until a feature subset of the desired size k is reached. There are 4 different flavors of SFAs available via the `Sequential Feature Selector`:
+In a nutshell, SFAs remove or add one feature at the time based on the classifier performance until a feature subset of the desired size k is reached. There are 4 different flavors of SFAs available via the `SequentialFeatureSelector`:

1. Sequential Forward Selection (SFS)
2. Sequential Backward Selection (SBS)
3. Sequential Floating Forward Selection (SFFS)
4. Sequential Floating Backward Selection (SFBS)
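As a quick illustration of how these four flavors are typically selected, here is a hedged sketch. The `forward` and `floating` constructor flags are an assumption borrowed from later mlxtend releases; the actual signature at this commit is not shown in the diff:

```python
# Hedged sketch: mapping the four SFA flavors to constructor arguments.
# The forward/floating flags are an assumption from later mlxtend releases.
from mlxtend.feature_selection import SequentialFeatureSelector
from sklearn.neighbors import KNeighborsClassifier

flavors = {
    'SFS':  dict(forward=True,  floating=False),
    'SBS':  dict(forward=False, floating=False),
    'SFFS': dict(forward=True,  floating=True),
    'SFBS': dict(forward=False, floating=True),
}

knn = KNeighborsClassifier(n_neighbors=4)
selectors = {name: SequentialFeatureSelector(knn, k_features=2, **kwargs)
             for name, kwargs in flavors.items()}
```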

-The ***floating*** variants, SFFS and SFBS can be considered as extensions of the simpler SFS and SBS algorithms. the floating algorithms have an additional exclusion or inclusion step to remove features once they were included (or excluded), so that a larger number of feature subset combinations can be sampled. It is important to emphasize that this step is conditional only occurs if the resulting feature subset is assessed as "better" by the criterion function after removal (or addition) of a particular feature. Furthermore, I added an optional check to skip the conditional exclusion steps if the algorithm gets stuck in cycles.
-The algorithms are outlined in pseudocode below:
+The ***floating*** variants, SFFS and SFBS can be considered as extensions to the simpler SFS and SBS algorithms. The floating algorithms have an additional exclusion or inclusion step to remove features once they were included (or excluded), so that a larger number of feature subset combinations can be sampled. It is important to emphasize that this step is conditional and only occurs if the resulting feature subset is assessed as "better" by the criterion function after removal (or addition) of a particular feature. Furthermore, I added an optional check to skip the conditional exclusion steps if the algorithm gets stuck in cycles.
+The algorithms are outlined in pseudo code below:

## Sequential Forward Selection (SFS)

@@ -676,28 +677,6 @@ Sequential Feature Selection for Classification and Regression.
'cv_scores' (list of individual cross-validation scores)
'avg_score' (average cross-validation score)

**Examples**

>>> from sklearn.neighbors import KNeighborsClassifier
>>> from sklearn.datasets import load_iris
>>> iris = load_iris()
>>> X = iris.data
>>> y = iris.target
>>> knn = KNeighborsClassifier(n_neighbors=4)
>>> sfs = SequentialFeatureSelector(knn, k_features=2,
... scoring='accuracy', cv=5)
>>> sfs = sfs.fit(X, y)
>>> sfs.indices_
(2, 3)
>>> sfs.transform(X[:5])
array([[ 1.4, 0.2],
[ 1.4, 0.2],
[ 1.3, 0.2],
[ 1.5, 0.2],
[ 1.4, 0.2]])
>>> print('best score: %.2f' % sfs.k_score_)
best score: 0.97

### Methods

<hr>