From ad993ea67a0e5b0ff8f4a768672f6fe8449f1515 Mon Sep 17 00:00:00 2001
From: Jenifer Tabita Ciuciu-Kiss
Date: Sun, 27 Oct 2024 05:29:51 +0100
Subject: [PATCH] add the MLModelEvaluation properties

---
 development/index.html | 90 ++++++++++++++++++++++++++++++++++++++++++
 src/template.html      | 26 ++++++++++++
 2 files changed, 116 insertions(+)

diff --git a/development/index.html b/development/index.html
index 9492e3b..ce1e25e 100644
--- a/development/index.html
+++ b/development/index.html
@@ -300,6 +300,96 @@
 
 <h2>fair4ml: MLModel new properties</h2>
 
+
+<h2>fair4ml: MLModelEvaluation new properties</h2>
+
+<table>
+  <thead>
+    <tr>
+      <th>Property</th>
+      <th>Expected Type</th>
+      <th>Description</th>
+    </tr>
+  </thead>
+  <tbody>
+    <tr>
+      <td>fair4ml:hasEvaluation</td>
+      <td>fair4ml:MLModelEvaluation</td>
+      <td>Relationship to point to an evaluation of this ML model.</td>
+    </tr>
+    <tr>
+      <td>fair4ml:evaluatedMLModel</td>
+      <td>fair4ml:MLModel</td>
+      <td>MLModel evaluated with this evaluation. Reverse property: fair4ml:evaluatedWith.</td>
+    </tr>
+    <tr>
+      <td>fair4ml:evaluationDataset</td>
+      <td>cr:Dataset</td>
+      <td>Dataset used for the evaluation.</td>
+    </tr>
+    <tr>
+      <td>fair4ml:evaluationMetrics</td>
+      <td>schema:Text<br>schema:PropertyValue</td>
+      <td>Description of the metrics used for evaluating the ML model. Example with Text: ["Precision: 0.8", "Mean: 0.9"]. Example with PropertyValue: [{minValue: 0.0, maxValue: 1.0, value: 0.8, measurementTechnique: "Precision"} ...]</td>
+    </tr>
+    <tr>
+      <td>fair4ml:evaluationResults</td>
+      <td>schema:Text</td>
+      <td>Summary of the results from the evaluation.</td>
+    </tr>
+    <tr>
+      <td>fair4ml:evaluationSoftware</td>
+      <td>schema:SoftwareSourceCode<br>schema:SoftwareApplication</td>
+      <td>Code used to perform the evaluation.</td>
+    </tr>
+    <tr>
+      <td>fair4ml:extrinsicEvaluation</td>
+      <td>schema:Boolean</td>
+      <td>Indicates whether this evaluation is extrinsic, i.e., done with an existing model, outside the model training scope, and on a totally unseen dataset. It could be done by third parties or by the authors of the model.</td>
+    </tr>
+  </tbody>
+</table>
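
For context, here is a minimal sketch of how the properties added above could be combined in a single fair4ml:MLModelEvaluation description. It is not part of the patch: the identifiers, values, and @context IRIs are hypothetical placeholders; only the property names and expected types follow the table.

<!-- Hypothetical example only: identifiers, values, and context IRIs are illustrative
     placeholders and are not taken from this patch. -->
<script type="application/ld+json">
{
  "@context": {
    "fair4ml": "https://w3id.org/fair4ml/",
    "schema": "https://schema.org/",
    "cr": "http://mlcommons.org/croissant/"
  },
  "@type": "fair4ml:MLModelEvaluation",
  "fair4ml:evaluatedMLModel": { "@id": "https://example.org/models/example-model" },
  "fair4ml:evaluationDataset": { "@id": "https://example.org/datasets/held-out-test-set" },
  "fair4ml:evaluationMetrics": {
    "@type": "schema:PropertyValue",
    "schema:measurementTechnique": "Precision",
    "schema:minValue": 0.0,
    "schema:maxValue": 1.0,
    "schema:value": 0.8
  },
  "fair4ml:evaluationResults": "Precision of 0.8 on the held-out test set.",
  "fair4ml:evaluationSoftware": { "@id": "https://example.org/code/evaluation-script" },
  "fair4ml:extrinsicEvaluation": true
}
</script>

Under these assumptions, fair4ml:hasEvaluation would be stated on the model side to point at this evaluation node, while fair4ml:evaluatedMLModel links back from the evaluation to the model.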