Docs: rename Evaluation to Evaluate
markotoplak committed Sep 24, 2019
1 parent a20bda7 commit faf029d
Showing 54 changed files with 45 additions and 45 deletions.
16 changes: 8 additions & 8 deletions doc/visual-programming/source/index.rst
@@ -128,18 +128,18 @@ Unsupervised
widgets/unsupervised/manifoldlearning


-Evaluation
-----------
+Evaluate
+--------

.. toctree::
:maxdepth: 1

-widgets/evaluation/calibrationplot
-widgets/evaluation/confusionmatrix
-widgets/evaluation/liftcurve
-widgets/evaluation/predictions
-widgets/evaluation/rocanalysis
-widgets/evaluation/testandscore
+widgets/evaluate/calibrationplot
+widgets/evaluate/confusionmatrix
+widgets/evaluate/liftcurve
+widgets/evaluate/predictions
+widgets/evaluate/rocanalysis
+widgets/evaluate/testandscore


.. toctree::
2 changes: 1 addition & 1 deletion doc/visual-programming/source/widgets/data/datasampler.md
@@ -38,7 +38,7 @@ First, let's see how the **Data Sampler** works. We will use the *iris* data fro

Now, we will use the **Data Sampler** to split the data into training and testing part. We are using the *iris* data, which we loaded with the [File](../data/file.md) widget. In **Data Sampler**, we split the data with *Fixed proportion of data*, keeping 70% of data instances in the sample.

-Then we connected two outputs to the [Test & Score](../evaluation/testandscore.md) widget, *Data Sample --> Data* and *Remaining Data --> Test Data*. Finally, we added [Logistic Regression](../model/logisticregression.md) as the learner. This runs logistic regression on the Data input and evaluates the results on the Test Data.
+Then we connected two outputs to the [Test & Score](../evaluate/testandscore.md) widget, *Data Sample --> Data* and *Remaining Data --> Test Data*. Finally, we added [Logistic Regression](../model/logisticregression.md) as the learner. This runs logistic regression on the Data input and evaluates the results on the Test Data.

![](images/DataSampler-Example2.png)
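
The same split-and-evaluate workflow can be sketched with Orange's scripting API. A minimal sketch, assuming the direct `TestOnTestData(train, test, learners)` call form (newer Orange releases construct the sampler first and then call it) and a NumPy 70/30 split standing in for the Data Sampler widget:

```python
import numpy as np
from Orange.data import Table
from Orange.classification import LogisticRegressionLearner
from Orange.evaluation import TestOnTestData, CA, AUC

data = Table("iris")

# Fixed proportion of data: keep 70% of the instances in the sample.
rng = np.random.RandomState(0)
idx = rng.permutation(len(data))
n_sample = int(0.7 * len(data))
sample, remaining = data[idx[:n_sample]], data[idx[n_sample:]]

# Data Sample -> Data, Remaining Data -> Test Data, Logistic Regression as the learner.
results = TestOnTestData(sample, remaining, [LogisticRegressionLearner()])
print("CA:", CA(results), "AUC:", AUC(results))
```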

4 changes: 2 additions & 2 deletions doc/visual-programming/source/widgets/data/preprocess.md
@@ -70,6 +70,6 @@ Then we will send the *Data Sample* into [Preprocess](../data/preprocess.md). We

Finally, **Predictions** also needs the data to predict on. We will use the output of [Data Sampler](../data/datasampler.md) for prediction, but this time not the *Data Sample*, but the *Remaining Data*, this is the data that wasn't used for training the model.

-Notice how we send the remaining data directly to **Predictions** without applying any preprocessing. This is because Orange handles preprocessing on new data internally to prevent any errors in the model construction. The exact same preprocessor that was used on the training data will be used for predictions. The same process applies to [Test & Score](../evaluation/testandscore.md).
+Notice how we send the remaining data directly to **Predictions** without applying any preprocessing. This is because Orange handles preprocessing on new data internally to prevent any errors in the model construction. The exact same preprocessor that was used on the training data will be used for predictions. The same process applies to [Test & Score](../evaluate/testandscore.md).

-![](../evaluation/images/Predictions-Example2.png)
+![](../evaluate/images/Predictions-Example2.png)
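
In scripting terms, the point of this paragraph is that an Orange model carries its preprocessing with it, so new data can be passed in raw. A minimal sketch, with `Normalize` standing in for whichever preprocessor the workflow uses and a NumPy split standing in for Data Sampler:

```python
import numpy as np
from Orange.data import Table
from Orange.preprocess import Normalize
from Orange.classification import LogisticRegressionLearner

data = Table("heart_disease")

# Data Sampler: keep 70% of the instances as the training sample.
rng = np.random.RandomState(0)
idx = rng.permutation(len(data))
n = int(0.7 * len(data))
sample, remaining = data[idx[:n]], data[idx[n:]]

# Preprocess only the training sample.
sample_pp = Normalize()(sample)
model = LogisticRegressionLearner()(sample_pp)

# The remaining data goes in raw: the model maps it into its own
# preprocessed domain internally, so no manual preprocessing is applied here.
predictions = model(remaining)
print(predictions[:5])  # predicted class indices for the first five instances
```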
2 changes: 1 addition & 1 deletion doc/visual-programming/source/widgets/data/rank.md
@@ -48,7 +48,7 @@ Notice how the widget outputs a dataset that includes only the best-scored attri
Example: Feature Subset Selection for Machine Learning
------------------------------------------------------

-What follows is a bit more complicated example. In the workflow below, we first split the data into a training set and a test set. In the upper branch, the training data passes through the **Rank** widget to select the most informative attributes, while in the lower branch there is no feature selection. Both feature selected and original datasets are passed to their own [Test & Score](../evaluation/testandscore.md) widgets, which develop a *Naive Bayes* classifier and score it on a test set.
+What follows is a bit more complicated example. In the workflow below, we first split the data into a training set and a test set. In the upper branch, the training data passes through the **Rank** widget to select the most informative attributes, while in the lower branch there is no feature selection. Both feature selected and original datasets are passed to their own [Test & Score](../evaluate/testandscore.md) widgets, which develop a *Naive Bayes* classifier and score it on a test set.

![](images/Rank-and-Test.png)
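
A minimal scripted sketch of the same comparison, assuming `SelectBestFeatures` with an `InfoGain` scorer as a stand-in for the Rank widget and the *zoo* data as a stand-in dataset:

```python
import numpy as np
from Orange.data import Table
from Orange.classification import NaiveBayesLearner
from Orange.evaluation import TestOnTestData, CA
from Orange.preprocess import SelectBestFeatures
from Orange.preprocess.score import InfoGain

data = Table("zoo")  # stand-in dataset with many discrete features

# Split into a training set and a test set.
rng = np.random.RandomState(0)
idx = rng.permutation(len(data))
n = int(0.7 * len(data))
train, test = data[idx[:n]], data[idx[n:]]

# Upper branch: keep only the k most informative attributes (Rank).
selected_train = SelectBestFeatures(method=InfoGain(), k=5)(train)
test_selected = test.transform(selected_train.domain)

nb = NaiveBayesLearner()
print("CA with selection:   ", CA(TestOnTestData(selected_train, test_selected, [nb])))
print("CA without selection:", CA(TestOnTestData(train, test, [nb])))
```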

@@ -20,8 +20,8 @@ The [Calibration Plot](https://en.wikipedia.org/wiki/Calibration_curve)plots cla
Example
-------

-At the moment, the only widget which gives the right type of signal needed by the **Calibration Plot** is [Test & Score](../evaluation/testandscore.md). The Calibration Plot will hence always follow Test & Score and, since it has no outputs, no other widgets follow it.
+At the moment, the only widget which gives the right type of signal needed by the **Calibration Plot** is [Test & Score](../evaluate/testandscore.md). The Calibration Plot will hence always follow Test & Score and, since it has no outputs, no other widgets follow it.

-Here is a typical example, where we compare three classifiers (namely [Naive Bayes](../model/naivebayes.md), [Tree](../model/tree.md) and [Constant](../model/constant.md)) and input them into [Test & Score](../evaluation/testandscore.md). We used the *Titanic* dataset. Test & Score then displays evaluation results for each classifier. Then we draw **Calibration Plot** and [ROC Analysis](../evaluation/rocanalysis.md) widgets from Test & Score to further analyze the performance of classifiers. **Calibration Plot** enables you to see prediction accuracy of class probabilities in a plot.
+Here is a typical example, where we compare three classifiers (namely [Naive Bayes](../model/naivebayes.md), [Tree](../model/tree.md) and [Constant](../model/constant.md)) and input them into [Test & Score](../evaluate/testandscore.md). We used the *Titanic* dataset. Test & Score then displays evaluation results for each classifier. Then we draw **Calibration Plot** and [ROC Analysis](../evaluate/rocanalysis.md) widgets from Test & Score to further analyze the performance of classifiers. **Calibration Plot** enables you to see prediction accuracy of class probabilities in a plot.

![](images/CalibrationPlot-example.png)
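
A rough scripted equivalent, assuming cross-validation results from Orange and scikit-learn's `calibration_curve` as a stand-in for the plot itself; `MajorityLearner` plays the role of the Constant widget for classification, and the direct `CrossValidation(data, learners, k=10)` call form is an assumption about the Orange version in use:

```python
from Orange.data import Table
from Orange.classification import NaiveBayesLearner, TreeLearner, MajorityLearner
from Orange.evaluation import CrossValidation
from sklearn.calibration import calibration_curve

data = Table("titanic")
learners = [NaiveBayesLearner(), TreeLearner(), MajorityLearner()]
results = CrossValidation(data, learners, k=10)

# Probability assigned to the target class ("yes" = survived) by each model.
target = list(data.domain.class_var.values).index("yes")
y_true = (results.actual == target).astype(int)
for name, probs in zip(["Naive Bayes", "Tree", "Constant"], results.probabilities):
    frac_pos, mean_pred = calibration_curve(y_true, probs[:, target], n_bins=10)
    print(name, list(zip(mean_pred.round(2), frac_pos.round(2))))
```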
@@ -14,7 +14,7 @@ Shows proportions between the predicted and actual class.

The [Confusion Matrix](https://en.wikipedia.org/wiki/Confusion_matrix) gives the number/proportion of instances between the predicted and actual class. The selection of the elements in the matrix feeds the corresponding instances into the output signal. This way, one can observe which specific instances were misclassified and how.

-The widget usually gets the evaluation results from [Test & Score](../evaluation/testandscore.md); an example of the schema is shown below.
+The widget usually gets the evaluation results from [Test & Score](../evaluate/testandscore.md); an example of the schema is shown below.

![](images/ConfusionMatrix-stamped.png)

@@ -41,7 +41,7 @@ The following workflow demonstrates what this widget can be used for.

![](images/ConfusionMatrix-Schema.png)

-[Test & Score](../evaluation/testandscore.md) gets the data from [File](../data/file.md) and two learning algorithms from [Naive Bayes](../model/naivebayes.md) and [Tree](../model/tree.md). It performs cross-validation or some other train-and-test procedures to get class predictions by both algorithms for all (or some) data instances. The test results are fed into the **Confusion Matrix**, where we can observe how many instances were misclassified and in which way.
+[Test & Score](../evaluate/testandscore.md) gets the data from [File](../data/file.md) and two learning algorithms from [Naive Bayes](../model/naivebayes.md) and [Tree](../model/tree.md). It performs cross-validation or some other train-and-test procedures to get class predictions by both algorithms for all (or some) data instances. The test results are fed into the **Confusion Matrix**, where we can observe how many instances were misclassified and in which way.

In the output, we used [Data Table](../data/datatable.md) to show the instances we selected in the confusion matrix. If we, for instance, click *Misclassified*, the table will contain all instances which were misclassified by the selected method.
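
A scripted sketch of the same idea, using the *iris* data as a stand-in and scikit-learn's `confusion_matrix` in place of the widget; the `actual`, `predicted` and `row_indices` fields of Orange's evaluation results are assumed to behave as in Orange 3:

```python
from Orange.data import Table
from Orange.classification import NaiveBayesLearner, TreeLearner
from Orange.evaluation import CrossValidation
from sklearn.metrics import confusion_matrix

data = Table("iris")  # stand-in for the data loaded with the File widget
results = CrossValidation(data, [NaiveBayesLearner(), TreeLearner()], k=10)

# Confusion matrix for the first learner (Naive Bayes): rows = actual, columns = predicted.
print(confusion_matrix(results.actual, results.predicted[0]))

# "Misclassified" selection: the instances the first model got wrong.
wrong = results.actual != results.predicted[0]
misclassified = data[results.row_indices[wrong]]
print(len(misclassified), "misclassified instances")
```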

@@ -25,7 +25,7 @@ classes are guessed correctly and then run straight along 1 on y-axis to
Example
-------

-At the moment, the only widget which gives the right type of signal needed by the **Lift Curve** is [Test & Score](../evaluation/testandscore.md).
+At the moment, the only widget which gives the right type of signal needed by the **Lift Curve** is [Test & Score](../evaluate/testandscore.md).

In the example below, we try to see the prediction quality for the class 'survived' on the *Titanic* dataset. We compared three different classifiers in the Test Learners widget and sent them to Lift Curve to see their performance against a random model. We see the [Tree](../model/tree.md) classifier is the best out of the three, since it best aligns with *lift convex hull*. We also see that its performance is the best for the first 30% of the population (in order of descending probability), which we can set as the threshold for optimal classification.
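
The lift computation itself is easy to sketch from Orange's evaluation results. A minimal NumPy version for the class "yes" (survived), assuming the direct `CrossValidation(data, learners, k=10)` call form:

```python
import numpy as np
from Orange.data import Table
from Orange.classification import TreeLearner
from Orange.evaluation import CrossValidation

data = Table("titanic")
results = CrossValidation(data, [TreeLearner()], k=10)

# Order instances by the predicted probability of "yes" and compare the
# cumulative hit rate against a random model (lift = recall / population share).
target = list(data.domain.class_var.values).index("yes")
probs = results.probabilities[0][:, target]
actual = (results.actual == target).astype(int)

order = np.argsort(-probs)
cum_hits = np.cumsum(actual[order]) / actual.sum()
population = np.arange(1, len(actual) + 1) / len(actual)
lift = cum_hits / population

print(f"Lift at 10% of the population: {lift[int(0.1 * len(lift))]:.2f}")
print(f"Lift at 30% of the population: {lift[int(0.3 * len(lift))]:.2f}")
```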

@@ -23,7 +23,7 @@ The widget receives a dataset and one or more predictors (predictive models, not
4. Select the desired output.
5. Predictions.

-The widget shows the probabilities and final decisions of [predictive models](https://en.wikipedia.org/wiki/Predictive_modelling). The output of the widget is another dataset, where predictions are appended as new meta attributes. You can select which features you wish to output (original data, predictions, probabilities). The result can be observed in a [Data Table](../data/datatable.md). If the predicted data includes true class values, the result of prediction can also be observed in a [Confusion Matrix](../evaluation/confusionmatrix.md).
+The widget shows the probabilities and final decisions of [predictive models](https://en.wikipedia.org/wiki/Predictive_modelling). The output of the widget is another dataset, where predictions are appended as new meta attributes. You can select which features you wish to output (original data, predictions, probabilities). The result can be observed in a [Data Table](../data/datatable.md). If the predicted data includes true class values, the result of prediction can also be observed in a [Confusion Matrix](../evaluate/confusionmatrix.md).

Examples
--------
@@ -32,11 +32,11 @@ In the first example, we will use *Attrition - Train* data from the [Datasets](.

For predictions we need both the training data, which we have loaded in the first **Datasets** widget and the data to predict, which we will load in another [Datasets](../data/datasets.md) widget. We will use *Attrition - Predict* data this time. Connect the second data set to **Predictions**. Now we can see predictions for the three data instances from the second data set.

-The [Tree](../model/tree.md) model predicts none of the employees will leave the company. You can try other models and see if predictions change. Or test the predictive scores first in the [Test & Score](../evaluation/testandscore.md) widget.
+The [Tree](../model/tree.md) model predicts none of the employees will leave the company. You can try other models and see if predictions change. Or test the predictive scores first in the [Test & Score](../evaluate/testandscore.md) widget.

![](images/Predictions-Example1.png)
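
A minimal scripted counterpart of this first example. The file names are hypothetical local copies of the two Attrition datasets (the Datasets widget fetches them from an online repository), and the `ret=Model.ValueProbs` return convention is assumed from Orange 3's model API:

```python
from Orange.data import Table
from Orange.classification import TreeLearner
from Orange.base import Model

# Hypothetical local copies of the two datasets used in the example.
train = Table("attrition-train.tab")
new = Table("attrition-predict.tab")

model = TreeLearner()(train)

# Class predictions and probabilities for the new instances.
values, probs = model(new, ret=Model.ValueProbs)
class_values = train.domain.class_var.values
for value, p in zip(values, probs):
    print(class_values[int(value)], p.round(2))
```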

-In the second example, we will see how to properly use [Preprocess](../data/preprocess.md) with **Predictions** or [Test & Score](../evaluation/testandscore.md).
+In the second example, we will see how to properly use [Preprocess](../data/preprocess.md) with **Predictions** or [Test & Score](../evaluate/testandscore.md).

This time we are using the *heart disease.tab* data from the [File](../data/file.md) widget. You can access the data through the dropdown menu. This is a dataset with 303 patients that came to the doctor suffering from a chest pain. After the tests were done, some patients were found to have diameter narrowing and others did not (this is our class variable).

@@ -46,6 +46,6 @@ Then we will send the *Data Sample* into [Preprocess](../data/preprocess.md). We

Finally, **Predictions** also needs the data to predict on. We will use the output of [Data Sampler](../data/datasampler.md) for prediction, but this time not the *Data Sample*, but the *Remaining Data*, this is the data that wasn't used for training the model.

-Notice how we send the remaining data directly to **Predictions** without applying any preprocessing. This is because Orange handles preprocessing on new data internally to prevent any errors in the model construction. The exact same preprocessor that was used on the training data will be used for predictions. The same process applies to [Test & Score](../evaluation/testandscore.md).
+Notice how we send the remaining data directly to **Predictions** without applying any preprocessing. This is because Orange handles preprocessing on new data internally to prevent any errors in the model construction. The exact same preprocessor that was used on the training data will be used for predictions. The same process applies to [Test & Score](../evaluate/testandscore.md).

![](images/Predictions-Example2.png)
@@ -40,6 +40,6 @@ The diagonal dotted line represents the behavior of a random classifier. The ful
Example
-------

-At the moment, the only widget which gives the right type of signal needed by the **ROC Analysis** is [Test & Score](../evaluation/testandscore.md). Below, we compare two classifiers, namely [Tree](../model/tree.md) and [Naive Bayes](../model/naivebayes.md), in **Test\&Score** and then compare their performance in **ROC Analysis**, [Lift Curve](../evaluation/liftcurve.md) and [Calibration Plot](../evaluation/calibrationplot.md).
+At the moment, the only widget which gives the right type of signal needed by the **ROC Analysis** is [Test & Score](../evaluate/testandscore.md). Below, we compare two classifiers, namely [Tree](../model/tree.md) and [Naive Bayes](../model/naivebayes.md), in **Test\&Score** and then compare their performance in **ROC Analysis**, [Lift Curve](../evaluate/liftcurve.md) and [Calibration Plot](../evaluate/calibrationplot.md).

![](images/ROCAnalysis-example.png)
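
A rough scripted sketch, with scikit-learn's `roc_curve` computing the ROC points from Orange's cross-validation probabilities; the *Titanic* data is a stand-in, since the example above does not name a dataset, and the direct `CrossValidation(...)` call form is an assumption:

```python
from Orange.data import Table
from Orange.classification import NaiveBayesLearner, TreeLearner
from Orange.evaluation import CrossValidation, AUC
from sklearn.metrics import roc_curve

data = Table("titanic")  # stand-in dataset
learners = [TreeLearner(), NaiveBayesLearner()]
results = CrossValidation(data, learners, k=10)
print("AUC:", AUC(results))

# ROC points for the target class "yes", one curve per classifier.
target = list(data.domain.class_var.values).index("yes")
y_true = (results.actual == target).astype(int)
for name, probs in zip(["Tree", "Naive Bayes"], results.probabilities):
    fpr, tpr, _ = roc_curve(y_true, probs[:, target])
    print(name, "has", len(fpr), "ROC points")
```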
@@ -13,7 +13,7 @@ Tests learning algorithms on data.

- Evaluation Results: results of testing classification algorithms

-The widget tests learning algorithms. Different sampling schemes are available, including using separate test data. The widget does two things. First, it shows a table with different classifier performance measures, such as [classification accuracy](https://en.wikipedia.org/wiki/Accuracy_and_precision) and [area under the curve](https://en.wikipedia.org/wiki/Receiver_operating_characteristic#Area_under_the_curve). Second, it outputs evaluation results, which can be used by other widgets for analyzing the performance of classifiers, such as [ROC Analysis](../evaluation/rocanalysis.md) or [Confusion Matrix](../evaluation/confusionmatrix.md).
+The widget tests learning algorithms. Different sampling schemes are available, including using separate test data. The widget does two things. First, it shows a table with different classifier performance measures, such as [classification accuracy](https://en.wikipedia.org/wiki/Accuracy_and_precision) and [area under the curve](https://en.wikipedia.org/wiki/Receiver_operating_characteristic#Area_under_the_curve). Second, it outputs evaluation results, which can be used by other widgets for analyzing the performance of classifiers, such as [ROC Analysis](../evaluate/rocanalysis.md) or [Confusion Matrix](../evaluate/confusionmatrix.md).

The *Learner* signal has an uncommon property: it can be connected to more than one widget to test multiple learners with the same procedures.

@@ -45,8 +45,8 @@ The *Learner* signal has an uncommon property: it can be connected to more than
Example
-------

-In a typical use of the widget, we give it a dataset and a few learning algorithms and we observe their performance in the table inside the **Test & Score** widget and in the [ROC](../evaluation/rocanalysis.md). The data is often preprocessed before testing; in this case we did some manual feature selection ([Select Columns](../data/selectcolumns.md) widget) on *Titanic* dataset, where we want to know only the sex and status of the survived and omit the age.
+In a typical use of the widget, we give it a dataset and a few learning algorithms and we observe their performance in the table inside the **Test & Score** widget and in the [ROC](../evaluate/rocanalysis.md). The data is often preprocessed before testing; in this case we did some manual feature selection ([Select Columns](../data/selectcolumns.md) widget) on *Titanic* dataset, where we want to know only the sex and status of the survived and omit the age.

![](images/TestLearners-example-classification.png)

-Another example of using this widget is presented in the documentation for the [Confusion Matrix](../evaluation/confusionmatrix.md) widget.
+Another example of using this widget is presented in the documentation for the [Confusion Matrix](../evaluate/confusionmatrix.md) widget.
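
A minimal scripted version of this example, assuming the direct `CrossValidation(data, learners, k=10)` call form (newer releases use `CrossValidation(k=10)(data, learners)`) and `Domain`/`transform` for the manual column selection:

```python
from Orange.data import Table, Domain
from Orange.classification import NaiveBayesLearner, LogisticRegressionLearner, TreeLearner
from Orange.evaluation import CrossValidation, CA, AUC

data = Table("titanic")

# Select Columns: keep only status and sex, omit age.
keep = [data.domain["status"], data.domain["sex"]]
data = data.transform(Domain(keep, data.domain.class_var))

learners = [NaiveBayesLearner(), LogisticRegressionLearner(), TreeLearner()]
results = CrossValidation(data, learners, k=10)
for learner, ca, auc in zip(learners, CA(results), AUC(results)):
    print(learner.name, "CA: %.3f  AUC: %.3f" % (ca, auc))
```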
4 changes: 2 additions & 2 deletions doc/visual-programming/source/widgets/model/adaboost.md
@@ -34,10 +34,10 @@ The [AdaBoost](https://en.wikipedia.org/wiki/AdaBoost) (short for "Adaptive boos
Examples
--------

-For classification, we loaded the *iris* dataset. We used *AdaBoost*, [Tree](../model/tree.md) and [Logistic Regression](../model/logisticregression.md) and evaluated the models' performance in [Test & Score](../evaluation/testandscore.md).
+For classification, we loaded the *iris* dataset. We used *AdaBoost*, [Tree](../model/tree.md) and [Logistic Regression](../model/logisticregression.md) and evaluated the models' performance in [Test & Score](../evaluate/testandscore.md).

![](images/AdaBoost-classification.png)

-For regression, we loaded the *housing* dataset, sent the data instances to two different models (**AdaBoost** and [Tree](../model/tree.md)) and output them to the [Predictions](../evaluation/predictions.md) widget.
+For regression, we loaded the *housing* dataset, sent the data instances to two different models (**AdaBoost** and [Tree](../model/tree.md)) and output them to the [Predictions](../evaluate/predictions.md) widget.

![](images/AdaBoost-regression.png)
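
A sketch of the classification half of this comparison, with the caveat that the `SklAdaBoostClassificationLearner` import path is an assumption about where Orange exposes its AdaBoost wrapper, as is the direct `CrossValidation(...)` call form:

```python
from Orange.data import Table
from Orange.classification import TreeLearner, LogisticRegressionLearner
from Orange.ensembles import SklAdaBoostClassificationLearner  # import path is an assumption
from Orange.evaluation import CrossValidation, CA, AUC

data = Table("iris")
learners = [SklAdaBoostClassificationLearner(), TreeLearner(), LogisticRegressionLearner()]
results = CrossValidation(data, learners, k=10)
print("CA: ", CA(results))
print("AUC:", AUC(results))
```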
@@ -46,7 +46,7 @@ For the example below, we have used *zoo* dataset and passed it to **CN2 Rule In

![](images/CN2-visualize.png)

-The second workflow evaluates **CN2 Rule Induction** and [Tree](../model/tree.md) in [Test & Score](../evaluation/testandscore.md).
+The second workflow evaluates **CN2 Rule Induction** and [Tree](../model/tree.md) in [Test & Score](../evaluate/testandscore.md).

![](images/CN2-classification.png)
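
A minimal scripted sketch of both steps, assuming `CN2Learner` is importable from `Orange.classification` and that the fitted classifier exposes its rules as `rule_list`:

```python
from Orange.data import Table
from Orange.classification import CN2Learner, TreeLearner
from Orange.evaluation import CrossValidation, CA

data = Table("zoo")
results = CrossValidation(data, [CN2Learner(), TreeLearner()], k=10)
print("CA:", CA(results))

# Inspect a few of the induced rules by fitting on the full data.
model = CN2Learner()(data)
for rule in model.rule_list[:5]:  # rule_list attribute assumed from the CN2 docs
    print(rule)
```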
