diff --git a/classification/evaluation/evaluation-end2end.ipynb b/classification/evaluation/evaluation-end2end.ipynb
index a709b08..ce96d60 100644
--- a/classification/evaluation/evaluation-end2end.ipynb
+++ b/classification/evaluation/evaluation-end2end.ipynb
@@ -2,7 +2,7 @@
  "cells": [
   {
    "cell_type": "markdown",
-   "id": "b1e57c8a",
+   "id": "8afbd5e3",
    "metadata": {},
    "source": [
     "# Table of contents\n",
@@ -26,7 +26,7 @@
   },
   {
    "cell_type": "markdown",
-   "id": "12921db4",
+   "id": "a6143564",
    "metadata": {},
    "source": [
     "## Introduction \n",
@@ -49,7 +49,7 @@
   },
   {
    "cell_type": "markdown",
-   "id": "d46bd91d",
+   "id": "bafcbf96",
    "metadata": {},
    "source": [
     "## Aggregate Model Evaluation \n",
@@ -102,7 +102,7 @@
   },
   {
    "cell_type": "markdown",
-   "id": "8bd95ca9",
+   "id": "073ce554",
    "metadata": {},
    "source": [
     "If the dataset already exists because it had been saved under the same name before, load the dataset from fiftyone's folder."
@@ -111,7 +111,7 @@
   {
    "cell_type": "code",
    "execution_count": 3,
-   "id": "d9c393d6",
+   "id": "8681fc92",
    "metadata": {},
    "outputs": [],
    "source": [
@@ -121,7 +121,7 @@
   },
   {
    "cell_type": "markdown",
-   "id": "5dd071ae",
+   "id": "ab97bece",
    "metadata": {},
    "source": [
     "### Perform detections \n",
@@ -169,7 +169,7 @@
   },
   {
    "cell_type": "markdown",
-   "id": "a9294bf2",
+   "id": "39ce167e",
    "metadata": {},
    "source": [
     "### Save detections \n",
@@ -190,7 +190,7 @@
   },
   {
    "cell_type": "markdown",
-   "id": "f75ef7aa",
+   "id": "0c9f9304",
    "metadata": {},
    "source": [
     "### Evaluate detections against ground truth \n",
@@ -226,12 +226,12 @@
   },
   {
    "cell_type": "markdown",
-   "id": "d1e5e4b9",
+   "id": "9e403f93",
    "metadata": {},
    "source": [
     "### Calculate results and plot them \n",
     "\n",
-    "Now we have the performance of the model saved in the `results` variable and can extract various metrics from that. Here we print a simple report of all classes and their precision and recall values as well as the mAP with the metric employed by COCO. Next, a confusion matrix is plotted for each class (in our case only one). Finally, we can show the precision vs. recall curve for a specified threshold value."
+    "Now we have the performance of the model saved in the `results` variable and can extract various metrics from that. Here we print a simple report of all classes and their precision and recall values as well as the mAP with the metric employed by [COCO](https://cocodataset.org/#detection-eval). Next, a confusion matrix is plotted for each class (in our case only one). Finally, we can show the precision vs. recall curve for a specified threshold value."
    ]
   },
   {
@@ -505,7 +505,7 @@
   },
   {
    "cell_type": "markdown",
-   "id": "9997cd3f",
+   "id": "2d48bb3f",
    "metadata": {},
    "source": [
     "### View dataset in fiftyone \n",
@@ -574,7 +574,7 @@
   },
   {
    "cell_type": "markdown",
-   "id": "64e89754",
+   "id": "22561d30",
    "metadata": {},
    "source": [
     "## YOLO Model Evaluation \n",
@@ -584,7 +584,7 @@
   },
   {
    "cell_type": "markdown",
-   "id": "5782d392",
+   "id": "6f389582",
    "metadata": {},
    "source": [
     "### Load OIDv6 \n",
@@ -625,7 +625,7 @@
   },
   {
    "cell_type": "markdown",
-   "id": "9bfbf8a4",
+   "id": "1b509862",
    "metadata": {},
    "source": [
     "### Export dataset for conversion \n",
@@ -666,7 +666,7 @@
   },
   {
    "cell_type": "markdown",
-   "id": "065e0dca",
+   "id": "4cbee814",
    "metadata": {},
    "source": [
     "### Merge labels into one \n",
@@ -683,7 +683,7 @@
   },
   {
    "cell_type": "markdown",
-   "id": "030e4550",
+   "id": "7edb13a2",
    "metadata": {},
    "source": [
     "### Load YOLOv5 dataset \n",
@@ -722,7 +722,7 @@
   },
   {
    "cell_type": "markdown",
-   "id": "a9ea9ba1",
+   "id": "3ab2c225",
    "metadata": {},
    "source": [
     "In case the yolo dataset already exists because it had been saved earlier, we can simply load the dataset from fiftyone's database."
@@ -731,7 +731,7 @@
   {
    "cell_type": "code",
    "execution_count": 28,
-   "id": "42b72a2d",
+   "id": "0b86639e",
    "metadata": {},
    "outputs": [
     {
@@ -752,7 +752,7 @@
   },
   {
    "cell_type": "markdown",
-   "id": "2ebffbda",
+   "id": "9eb7bb84",
    "metadata": {},
    "source": [
     "### Perform detections \n",
@@ -801,7 +801,7 @@
   },
   {
    "cell_type": "markdown",
-   "id": "d0789cc2",
+   "id": "24df56d9",
    "metadata": {},
    "source": [
     "### Evaluate detections against ground truth \n",
@@ -812,7 +812,7 @@
   {
    "cell_type": "code",
    "execution_count": 29,
-   "id": "b6b35ed4",
+   "id": "4aaa4577",
    "metadata": {},
    "outputs": [
     {
@@ -832,12 +832,12 @@
   },
   {
    "cell_type": "markdown",
-   "id": "124d92a4",
+   "id": "b0df052d",
    "metadata": {},
    "source": [
     "### Calculate results and plot them \n",
     "\n",
-    "Now we have the performance of the model saved in the `results` variable and can extract various metrics from that. Here we print a simple report of all classes and their precision and recall values as well as the mAP with the metric employed by COCO. Next, a confusion matrix is plotted for each class (in our case only one). Finally, we can show the precision vs. recall curve for a specified threshold value."
+    "Now we have the performance of the model saved in the `results` variable and can extract various metrics from that. Here we print a simple report of all classes and their precision and recall values as well as the mAP with the metric employed by [COCO](https://cocodataset.org/#detection-eval). Next, a confusion matrix is plotted for each class (in our case only one). Finally, we can show the precision vs. recall curve for a specified threshold value."
    ]
   },
   {
@@ -1064,7 +1064,7 @@
   },
   {
    "cell_type": "markdown",
-   "id": "cfff898d",
+   "id": "def95455",
    "metadata": {},
    "source": [
     "### View dataset in fiftyone \n",
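Note on the reworded "Calculate results and plot them" cells: they describe pulling metrics from the FiftyOne `results` object returned by `evaluate_detections()`. As a reference for reviewers, here is a minimal sketch of that step, not the notebook's exact code; the field names `predictions` and `ground_truth`, the `eval` key, and the placeholder class `"Person"` are assumptions.

```python
import fiftyone as fo

# `dataset` is assumed to hold ground truth in "ground_truth" and model
# detections in "predictions"; compute_mAP=True is required for results.mAP()
results = dataset.evaluate_detections(
    "predictions",
    gt_field="ground_truth",
    eval_key="eval",
    compute_mAP=True,
)

# Per-class precision/recall/F1 report
results.print_report()

# COCO-style mAP over the evaluated classes
print("mAP:", results.mAP())

# Confusion matrix and precision-recall curves for the class of interest
results.plot_confusion_matrix(classes=["Person"]).show()
results.plot_pr_curves(classes=["Person"]).show()
```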