{
"cells": [
{
"cell_type": "markdown",
"id": "8afbd5e3",
"metadata": {},
"source": [
"# Table of contents\n",
"1. [Introduction](#introduction)\n",
"2. [YOLO Evaluation](#yoloevaluation)\n",
"    1. [Load OIDv6](#yololoadoid)\n",
"    2. [Export dataset for conversion](#yoloexportoid)\n",
"    3. [Merge labels into one](#yolomergelabels)\n",
"    4. [Load YOLOv5 dataset](#yololoadv5)\n",
"    5. [Perform detections](#yoloperformdetections)\n",
"    6. [Evaluate detections](#yolodetectionseval)\n",
"    7. [Calculate results and plot them](#yoloshowresults)\n",
"    8. [View dataset in fiftyone](#yolofiftyonesession)"
]
},
{
"cell_type": "markdown",
"id": "a6143564",
"metadata": {},
"source": [
"## Introduction <a name=\"introduction\"></a>\n",
"\n",
"This notebook loads the test dataset in YOLOv5 format from disk and evaluates the object detection model's performance."
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "3fe8177c",
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"/home/zenon/.local/share/miniconda3/lib/python3.7/site-packages/requests/__init__.py:104: RequestsDependencyWarning: urllib3 (1.26.13) or chardet (5.1.0)/charset_normalizer (2.0.4) doesn't match a supported version!\n",
" RequestsDependencyWarning)\n"
]
}
],
"source": [
"import fiftyone as fo\n",
"from PIL import Image\n",
"from detection import detect\n",
"from detection import detect_yolo_only"
]
},
{
"cell_type": "markdown",
"id": "22561d30",
"metadata": {},
"source": [
"## YOLO Model Evaluation <a name=\"yoloevaluation\"></a>\n",
"\n",
"In this section we examine the object detection model in detail by evaluating it separately from the classification model. The object detection model was trained on the two classes _Plant_ and _Houseplant_, which ship with the Open Images Dataset v6."
]
},
{
"cell_type": "markdown",
"id": "6f389582",
"metadata": {},
"source": [
"### Load OIDv6 <a name=\"yololoadoid\"></a>\n",
"\n",
"Since we are only interested in evaluating the model, we load only the _test_ split of the dataset. The only classes of interest are _Plant_ and _Houseplant_, and we do not want to load keypoint detections or segmentation masks, which is why we restrict the `label_types` parameter to detections. The test split contains 9184 images labeled with these classes."
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "19c5b271",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Downloading split 'test' to '/home/zenon/fiftyone/open-images-v6/test' if necessary\n",
"Necessary images already downloaded\n",
"Existing download of split 'test' is sufficient\n",
"Loading 'open-images-v6' split 'test'\n",
" 100% |█████████████| 12106/12106 [1.0m elapsed, 0s remaining, 209.3 samples/s] \n",
"Dataset 'open-images-v6-test' created\n"
]
}
],
"source": [
"import fiftyone as fo\n",
"import fiftyone.zoo as foz\n",
"\n",
"oid = foz.load_zoo_dataset(\n",
"    \"open-images-v6\",\n",
"    split=\"test\",\n",
"    classes=[\"Plant\", \"Houseplant\"],\n",
"    label_types=[\"detections\"],\n",
"    shuffle=True,\n",
")"
]
},
{
"cell_type": "markdown",
"id": "1b509862",
"metadata": {},
"source": [
"### Export dataset for conversion <a name=\"yoloexportoid\"></a>\n",
"\n",
"Unfortunately, the OID dataset does not adhere to the YOLOv5 label format understood by the object detection model. That is why we export the dataset as a `YOLOv5Dataset` using fiftyone's converter. The target directory will contain the proper folder structure as well as a `.yaml` file pointing to the images and labels. Take note that the exported files require around 4.2 GiB of space."
]
},
{
"cell_type": "code",
"execution_count": 25,
"id": "ebdde519",
"metadata": {},
"outputs": [],
"source": [
"# The directory to which to write the exported dataset\n",
"import os\n",
"\n",
"export_dir = \"/home/zenon/testdir\"\n",
"\n",
"# Only export if export_dir doesn't exist already\n",
"if not os.path.isdir(export_dir):\n",
"    # The name of the sample field containing the labels to export\n",
"    # Used when exporting labeled datasets (e.g., classification or detection)\n",
"    label_field = \"detections\"\n",
"\n",
"    # The type of dataset to export\n",
"    # Any subclass of `fiftyone.types.Dataset` is supported\n",
"    dataset_type = fo.types.YOLOv5Dataset\n",
"\n",
"    # Export the dataset\n",
"    oid.export(\n",
"        export_dir=export_dir,\n",
"        dataset_type=dataset_type,\n",
"        label_field=label_field,\n",
"        classes=['Plant', 'Houseplant']\n",
"    )"
]
},
{
"cell_type": "markdown",
"id": "4cbee814",
"metadata": {},
"source": [
"### Merge labels into one <a name=\"yolomergelabels\"></a>\n",
"\n",
"The label files start each line with a 0 if the ground truth specifies a plant and with a 1 if it specifies a houseplant. We do not care about the distinction between the two and only want to detect plants in general, so every leading 1 must become a 0. We merge into 0 rather than 1 because the YOLOv5 format requires class indices to start at 0. To accomplish this, we run a simple bash script in the labels directory:\n",
"```bash\n",
"for file in test/*\n",
"do\n",
"    sed -i 's/^./0/' \"$file\"\n",
"done\n",
"```\n",
"This script calls sed to change the first character of every line in each label file to a 0, performing the conversion in place (`-i` flag). For this script to work, the `val` directories inside `images` and `labels` must be renamed to `test`, and the path in the `data.yaml` file must be updated accordingly. `val` is a misleading name for a test split, so the rename also makes the naming accurate. (A Python equivalent is sketched in the next cell.)"
]
},
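{
"cell_type": "markdown",
"id": "merge-labels-python",
"metadata": {},
"source": [
"For reference, here is a minimal Python sketch of the same merge. It assumes the YOLOv5 labels live in a `test` directory of plain-text `.txt` files, one annotation per line:\n",
"```python\n",
"from pathlib import Path\n",
"\n",
"for label_file in Path('test').glob('*.txt'):\n",
"    lines = label_file.read_text().splitlines()\n",
"    # Replace the leading class index (0 or 1) of each annotation with 0\n",
"    merged = ['0' + line[1:] for line in lines if line]\n",
"    label_file.write_text('\\n'.join(merged) + '\\n')\n",
"```"
]
},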
{
"cell_type": "markdown",
"id": "7edb13a2",
"metadata": {},
"source": [
"### Load YOLOv5 dataset <a name=\"yololoadv5\"></a>\n",
"\n",
"Now that the labels are in the correct format and we only have one class to deal with, we can import the dataset into the variable `yolo_test`."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "002ae8fa",
"metadata": {},
"outputs": [],
"source": [
"yolo_dataset_dir = '/home/zenon/testdir'\n",
"\n",
"# The type of the dataset being imported\n",
"dataset_type = fo.types.YOLOv5Dataset\n",
"\n",
"# Import the dataset\n",
"yolo_test = fo.Dataset.from_dir(\n",
"    dataset_dir=yolo_dataset_dir,\n",
"    dataset_type=dataset_type,\n",
"    split='test'\n",
")\n",
"yolo_test.name = 'yolo_test4'\n",
"yolo_test.persistent = True"
]
},
{
"cell_type": "markdown",
"id": "3ab2c225",
"metadata": {},
"source": [
"If the YOLO dataset already exists because it was saved earlier (we marked it persistent), we can simply load it from fiftyone's database."
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "0b86639e",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"['dataset',\n",
" 'dataset-new',\n",
" 'open-images-v6-test',\n",
" 'plantsdata',\n",
" 'yolo',\n",
" 'yolo_test',\n",
" 'yolo_test2',\n",
" 'yolo_test3',\n",
" 'yolo_test4']"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"yolo_test = fo.load_dataset('yolo_test')\n",
"fo.list_datasets()"
]
},
{
"cell_type": "markdown",
"id": "9eb7bb84",
"metadata": {},
"source": [
"### Perform detections <a name=\"yoloperformdetections\"></a>\n",
"\n",
"We can proceed as before by running the model and saving its detections to the `predictions` field of each sample. Note that we call `detect_yolo_only()` here rather than `detect()`. Running detection on all 9184 images takes around 1.5 h on a GTX 750 Ti."
]
},
{
"cell_type": "code",
"execution_count": 28,
"id": "030e9c7c",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
" 100% |███████████████| 9184/9184 [1.5h elapsed, 0s remaining, 1.6 samples/s] \n"
]
}
],
"source": [
"# Do detections with model and save bounding boxes\n",
"yolo_view = yolo_test.view()\n",
"with fo.ProgressBar() as pb:\n",
"    for sample in pb(yolo_view):\n",
"        image = Image.open(sample.filepath)\n",
"        w, h = image.size\n",
"        pred = detect_yolo_only(sample.filepath, '../weights/yolo-final.onnx')\n",
"\n",
"        detections = []\n",
"        for _, row in pred.iterrows():\n",
"            xmin, xmax = int(row['xmin']), int(row['xmax'])\n",
"            ymin, ymax = int(row['ymin']), int(row['ymax'])\n",
"            rel_box = [\n",
"                xmin / w, ymin / h, (xmax - xmin) / w, (ymax - ymin) / h\n",
"            ]\n",
"            detections.append(\n",
"                fo.Detection(label='Plant',\n",
"                             bounding_box=rel_box,\n",
"                             confidence=float(row['box_conf'])))\n",
"\n",
"        sample[\"predictions\"] = fo.Detections(detections=detections)\n",
"        sample.save()"
]
},
{
"cell_type": "markdown",
"id": "24df56d9",
"metadata": {},
"source": [
"### Evaluate detections against ground truth <a name=\"yolodetectionseval\"></a>\n",
"\n",
"Having saved the predictions, we can now evaluate them by cross-checking with the ground truth labels. If we specify an `eval_key`, true positives, false positives and false negatives will be saved under that key."
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "4aaa4577",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Evaluating detections...\n",
" 100% |███████████████| 9184/9184 [23.3s elapsed, 0s remaining, 363.8 samples/s] \n",
"Performing IoU sweep...\n",
" 100% |███████████████| 9184/9184 [25.3s elapsed, 0s remaining, 333.0 samples/s] \n"
]
}
],
"source": [
"results = yolo_test.evaluate_detections(\"predictions\", gt_field=\"ground_truth\", eval_key=\"eval\", compute_mAP=True)"
]
},
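{
"cell_type": "markdown",
"id": "eval-key-aggregates",
"metadata": {},
"source": [
"As a quick sanity check, the per-sample counts stored under the `eval` key can be aggregated over the whole dataset. A minimal sketch, assuming fiftyone's standard `<eval_key>_tp`/`_fp`/`_fn` field naming:\n",
"```python\n",
"print('TP:', yolo_test.sum('eval_tp'))\n",
"print('FP:', yolo_test.sum('eval_fp'))\n",
"print('FN:', yolo_test.sum('eval_fn'))\n",
"```"
]
},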
{
"cell_type": "markdown",
"id": "b0df052d",
"metadata": {},
"source": [
"### Calculate results and plot them <a name=\"yoloshowresults\"></a>\n",
"\n",
"The model's performance is now saved in the `results` variable, and we can extract various metrics from it. Here we print a simple report of all classes with their precision and recall values, as well as the mAP computed with the metric employed by [COCO](https://cocodataset.org/#detection-eval). Finally, we plot the precision vs. recall curves for two IoU thresholds."
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "59355da5",
"metadata": {},
"outputs": [],
"source": [
"from helpers import set_size\n",
"import matplotlib.pyplot as plt\n",
"import seaborn as sns\n",
"import pandas as pd"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "0c8a3151",
"metadata": {},
"outputs": [],
"source": [
"# Style the plots\n",
"width = 418\n",
"sns.set_theme(style='whitegrid',\n",
"              rc={'text.usetex': True, 'font.family': 'serif', 'axes.labelsize': 10,\n",
"                  'font.size': 10, 'legend.fontsize': 8,\n",
"                  'xtick.labelsize': 8, 'ytick.labelsize': 8})"
]
},
{
"cell_type": "markdown",
"id": "5baf367a",
"metadata": {},
"source": [
"The LaTeX code for the table of the classification report can be printed by converting the results to a pandas DataFrame and calling its `to_latex()` method. The output can then be pasted into the LaTeX document."
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "f4ede94a",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\\begin{tabular}{lrrrr}\n",
"\\toprule\n",
"{} & precision & recall & f1-score & support \\\\\n",
"\\midrule\n",
"Plant & 0.633358 & 0.702811 & 0.666279 & 12238.0 \\\\\n",
"micro avg & 0.633358 & 0.702811 & 0.666279 & 12238.0 \\\\\n",
"macro avg & 0.633358 & 0.702811 & 0.666279 & 12238.0 \\\\\n",
"weighted avg & 0.633358 & 0.702811 & 0.666279 & 12238.0 \\\\\n",
"\\bottomrule\n",
"\\end{tabular}\n",
"\n",
"              precision    recall  f1-score   support\n",
"\n",
"       Plant       0.63      0.70      0.67     12238\n",
"\n",
"   micro avg       0.63      0.70      0.67     12238\n",
"   macro avg       0.63      0.70      0.67     12238\n",
"weighted avg       0.63      0.70      0.67     12238\n",
"\n"
]
}
],
"source": [
"results_df = pd.DataFrame(results.report()).transpose()\n",
"\n",
"# Results for hyper-optimized final YOLO model\n",
"\n",
"# Export DataFrame to LaTeX tabular environment\n",
"print(results_df.to_latex())\n",
"\n",
"# Print classification report\n",
"results.print_report()"
]
},
{
"cell_type": "code",
"execution_count": 13,
"id": "0c3c446e",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"              precision    recall  f1-score   support\n",
"\n",
"       Plant       0.55      0.74      0.63     12238\n",
"\n",
"   micro avg       0.55      0.74      0.63     12238\n",
"   macro avg       0.55      0.74      0.63     12238\n",
"weighted avg       0.55      0.74      0.63     12238\n",
"\n"
]
}
],
"source": [
"results_df = pd.DataFrame(results.report()).transpose()\n",
"\n",
"# Results for original YOLO model\n",
"\n",
"# Export DataFrame to LaTeX tabular environment\n",
"# print(results_df.to_latex())\n",
"\n",
"# Print classification report\n",
"results.print_report()"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "ea4985d4",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"              precision    recall  f1-score   support\n",
"\n",
"       Plant       0.52      0.54      0.53     22535\n",
"\n",
"   micro avg       0.52      0.54      0.53     22535\n",
"   macro avg       0.52      0.54      0.53     22535\n",
"weighted avg       0.52      0.54      0.53     22535\n",
"\n"
]
}
],
"source": [
"results_df = pd.DataFrame(results.report()).transpose()\n",
"\n",
"# Export DataFrame to LaTeX tabular environment\n",
"# print(results_df.to_latex())\n",
"\n",
"# Print classification report\n",
"results.print_report()"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "a6e0e146",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"0.5545944356667605\n"
]
}
],
"source": [
"# Result of final optimized YOLO model\n",
"print(results.mAP())"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "98122829",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Ignoring unsupported argument `thresholds` for the 'matplotlib' backend\n",
"Ignoring unsupported argument `thresholds` for the 'matplotlib' backend\n"
]
},
{
"data": {
"image/png": "iVBORw0KGgoAAAANSUhEUgAAAjgAAACoCAYAAADtjJScAAAAOXRFWHRTb2Z0d2FyZQBNYXRwbG90bGliIHZlcnNpb24zLjUuMiwgaHR0cHM6Ly9tYXRwbG90bGliLm9yZy8qNh9FAAAACXBIWXMAAA9hAAAPYQGoP6dpAAAcBElEQVR4nO3dYWwT5/0H8K//NB1B9dlIWaUJn8WqGJQ4qaAb23JIezGhyUmlITw1RhOaSCnkzQSTRqS9WDMR9mrJtNJXBY/1zaTGRbXUvSCGpS97kaAbY46JStEWxZ7oJqbmzigBgnr/F8y3OLHj8935fD5/PxJq7vL47lfD/fS75557Hp+maRqIiIiIPOT/mh0AERERkd1Y4BAREZHnsMAhIiIiz2GBQ0RERJ7DAoeIiIg8hwUOEREReQ4LHCIiIvIcFjhERETkOSxwiIiIyHOec/qEqqoilUoBAE6ePFmxTSaTAQAoigJRFCFJkmPxEVHrY54hIsd7cGRZxvLyctXf5/N5yLKMWCyGRCKBZDLpXHBE5AnMM0TkeA9OLBaDoihQVbXi72VZht/v17f9fj9kWTZ1d3Xr1i1omoaOjg7T8RKRcWtra/D5fNi/f39T42CeIfKmenKM4wVOLUtLSwgGg/p2MBismqRq0TRN/1Or3dOnT/Hcc8/B5/OZOlcjMT5r3B4f4P4YjcbXKmv3NiLPPHnyxKboiMgOritwKlEUxdTnOjo68OTJE6ytrRlq//TpU1PncQrjs8bt8QHuj9FIfK3ak2Elz2iahu7u7i3bra6uYnFxEbt370ZnZ6epczUS47OG8VljNL579+4Zvgl0XYETDofL7qSWl5chiqLp43V0dDDxNBjjs87tMdaTfFqB3XnG5/Nhx44dhtp2dnYabtsMjM8axmdNrfjq6eF2TYGjqioEQYAkSZicnNT3FwoFS283MPE4h/FZ5/YY7Uw+zdCoPENE7uN4gSPLMj7++GMUi0WIoohYLAYAiMfjSKfTEEURQ0NDyGQyUBQFp06dciSuQqGAR48eufru2Q3x+f1+RCKRpsZAVIvb88yLL77I64iowRwvcCRJqninNDs7q/9cSkZOefDgAeLxOL788ktHz9uqPvzwQ4RCIX27UgHGQoiaqRXyzN27d3mNEDWQax5RNVNXVxfS6TS6urqa3kNSiVvGZxQKBRw+fBiHDx821H5jIQSw8KH2Vcoz//nPf3DixAkUi8Vmh0TkaSxw/isUCqGnp8eV4x9WVlawffv2psf3yiuv4O7du5sS88YCrFYhtL7wYcFD7SQUCqGrq6vZYRC1BRY4VJdKxcjGAqxaIVSt8GHBQ+1oYWHB9Gd5nRDVxgKHGqJS8t1Y+GxV8PT09DCBkyeVZlA+duyYpeNUegRshZWXGVhwkRuxwCFHrU+CtQoe9uyQF3V3d1fs4TSq3rFwTuGYO3IbFjjUVJUKnoWFBfbskKdZ+Tdc7RGwVWZfZuCYu+Y7ffo03n777bJ9uVwOFy9eRKFQQCKRwJMnT3D79m0MDQ3he9/7nm3nzmQyAJ7NBC6K4pbzSWUyGX0eqo2f/epXv4pAIGBbXAALHHKZSCSCSCTCnh2iLTTi37zZlxk45q65MpkM5ubmkM/ny2bjjkajGBoagizLSCQSWFlZQW9vL370ox9hdnbW0szdJfl8HrIsY2JiAgAwMjJStcBRVRWXLl3S55za+Nkf//jHOHPmjOWY1mOBQ67Enh2i1tGOY+7+/ve/Y3l5Wd+2a0LWYDCIl156yXB7RVEwPDyM6elpjI2N1Wzv9/tNLyy7kSzL+piy0rFlWa5Y5MzMzGBwcHDLz2azWfT09NgSG8ACh1qE0Z6dVkyURF5V75i7Vrl+Hzx4gEgk0pDJYbdt24bPP//c0HQCqqrqj4Xi8XjNAuejjz7Ct7/9bUSjUVtiXVpaQjAY1LeDwWDF4imXy0GSJP2RVKXPBgIBrKys2BJXCQscailGenZaKVEStROvXL9dXV347LPPKvbgWJ2QNRgMGp4rSZZlfUZuURQr9p7Mz88jk8ng8ePH6Ovrw09+8pNNx1FVFRcvXqx6nqNHjxp+pKUoyqZ9+Xze0MzhDx8+NHQOo1jgUEtb37NTLVHu2rWryVESUSVGrl+3FjobHyM1Y0LWbDar/9zX14fp6elNBU4oFEIsFsPKykrVuZcEQTD0eGujcDhc1mOzvLy8qRBKJpMQRRGZTAbZbFYfK7Txs4qiYM+ePXXHsBUWOOQJWyXK27dvNzk6ItoKb1Tql8vlynpWYrEYDhw4YOpYZntwJEnC5OSkvl0oFPQCS1VVCIKAkydP6r/PZrPo7+9HNBqFIAhln/3nP/+J119/3VT81bDAIU9Znyhv3LiBY8eO4ZNPPsHOnTttHbxGRPbjjYoxsixjamoKiUQCiUQCwLOCBwDGx8f1ouLq1asoFAqQZRn79u2rejyzPTiiKGJoaAiZTAaKouhvSAFAPB5HOp2GIAh6zKW3vaLR6KbPjoyM1H3+WljgkCet79Y+ceIEAOD999/Hvn37XNvlTUTPVLpRKRaL2L59e7NDcwVJkpBOp8v2RaNR3Lx5s2zf+rlx7B7AW1JtbM3s7GzZdqWY1392q0doZv2frUcjcpFSgnz//fcBAMPDw9izZw/++Mc/4rPPPmtydERUSyQS0XteP/30UywtLTU5Imol7MEhT4tEIti1axfS6TSePn2K4eFhvcv77t277M0hcrnSXCmlntjbt2/j5ZdfbmZI1CLYg0NtIRwO49VXX8Xdu3fxhz/8AQBw48YN9uQQuVypJ/by5csAgE8++YTXLRnCHhxqK+t7bEqrObv9dVSidheJRLC6ugrgfz057IGlWtiDQ22ndEf44YcfAgAOHz6MPXv28K6QyMW6u7uRTqf1nhy7Fxsl72GBQ20pEongBz/4AR9ZEbWQcDiMvXv3AgAWFhZ4vdKWbC1wCoWCnYcjarhIJIJvfetbAJ49smJPjrsxx1Bp0DGvV6rF0hichYWFsrU4UqkU3nrrLYshETlr43wbN27c0PdTczHH0Ebd3d2b5schqsR0gXPmzBkUi8Wy5c7tnqSHyCmVBh9zEGNzMcdQNZFIRC9sFhYW4Pf7ea3SJqYLnIMHD2J4eLhs37Vr1ywHRNQs7MlxF+YY2sr6R1UAb0hoM9NjcCotvBUOhy0FQ9RslcbkcObj5mCOoa2UbkhKLwnwURVtZLoHJ5/PI5VKob+/HwCgaRpmZmbwwQcf2BYcUTNUW+yPd4jOYo6hWtY/qiLayHQPzvT0NEKhEDRNg6ZpAKD/l6jV8TXy5mOOoXrwtXHayHQPztjYGAYGBsr2SZJkOSAiN+Hg4+ZhjiEjOBaHqjHdgzMwMICHDx/iypUruHLlCh4+fIje3l47YyNyhY3P+tmT4wzmGDKCY3GoGktjcM6cOaMPBEwmk7hw4YK+tD2Rl1RbwyoUCvEV1QZhjiGjOBaHKjFd4Fy/fh3pdLps329+8xsmH/KsaoOPAXaLNwJzDBFZYbrACYVCm/b19fVZCobI7SKRiF7oFItFL
CwscDbVBmGOITM48R+VWHpEtRHXiaF2weTZeMwxVA8ONqaNTBc4kiTh9ddfRzQaBQDIsoyzZ8/aFhgRtTfmGKrHxpnI2atKpt+i6u3txblz5/Q5Ks6fP7/plU4iIrOYY6hekUiEY7RIZ2k1cVEUy+6oCoVCxefmRERmMMcQkVmGC5zr169DkiS88MILAIArV66U/V5VVciyjMuXL9sbIRG1BeYYIrKT4UdU77zzDrLZrL793nvvQVEU/Y+mafjiiy8aEiQReR9zDNmJSzeQ4R6cjfNR/OpXv9o0qyinUScis5hjyA58m4pKTA8yvnr1qj59+okTJ/DTn/6Ur3ASkW2YY8gMLt1AJaYLnP7+frz22muYnp5GT08P3nrrLSwvL9sYGhG1M6s5JpPJIJPJIJVKQZblim1Onz6NXC6HXC6HyclJmyKnZuPbVARYKHAEQQAAzMzM4NVXXwUABAIBQ59l4iGiWqzkmHw+D1mWEYvFkEgkkEwmK7YrFAo4fvw4pqamMDo6ak/gROQKlmcyzufz6OnpQT6fh6qqhj4nyzImJiYAACMjIxWfq5cST19fHy5cuGA2TCJqUWZzDPBsUsDSWAzg2bgMWZY35ZpTp04hFovZFzQRuYbpAmdwcBCpVAoffPABisUiUqkUdu7cWfNzTiceTdOwsrKyZZvV1dWy/7oN47OmkfGtP3atf2dGj+NGRuPTNA0+n8+Wc5rNMQCwtLSEYDCobweDwYrFUemtLUVRAACJRMJUrMwzjVdvfHZdm0Z57ftzWiNyjOkCx+/344033tC3z549a2gAoNOJZ21tDQsLC4baLi4umjqHUxifNY2Ir3TMxcVFbN++3bbjuZWR+J5//nlbzmU2x1RTyiXrjY2N6T8fOnQIg4OD+qOxejDPOMdofHZfm0Z55ftrFjtzjCsm+mtk4uno6EB3d/eWbVZXV7G4uIjdu3ejs7Oz7nM0GuOzppHxPXr0CACwe/duS4MavfId3rt3z/Q57Mwx4XC47MZpeXkZoiiWtclkMshms3quEQQB+XxeX/uqHswzjVdvfHZdm0Z57ftzWiNyjOEC55133oHf79fXgnnvvfcwNDRU1sbIJFxOJx6fz4cdO3YYatvZ2Wm4bTMwPmsaEV/pQrTr2K3+HVp5PGVXjgGezZez/uWEQqGgPwZXVRWCIEAUxbKbJlVVTeUYgHnGSUbjK12bi4uLePHFFx2bC8cr31+z2JljHJ/oz+nEQ0Stwc6J/kRRxNDQEDKZDBRFwalTp/TfxeNxpNNpRKNR/Y3ObDaLd9991/r/BLkGJ/wj02NwRFHE5cuXkUgk8MILL2Bubg79/f2GPsfEQ0S1mM0xJdVeUpidnd3Uhm9SeU9pwr8bN27g2LFjnPCvDZkucGZmZsq6iwcGBnD9+nV8//vfr/lZJh4iqsVKjiECnhU5LGzal+kCJxgMYnh42M5YiIh0zDFEZIXpmYz/9re/4eHDh2X71q8ETERkBXMMEVlhugcnkUjgyJEjCIfD8Pv9uHPnDs6dO2dnbETUxphjiMgKS4OM0+k0ZmZmoKoqfvazn2163ZuIyCzmGCKywvQjKgD6YplvvPEGCoXCpu5kIiIrmGOIyCzTBc7U1BQEQdDnpRgYGKi6MjgRUb2YY4jICtMFTn9/P4aHh9llTEQNwRxDRFaYLnAqLXrHNxyIyC7MMURkhelBxr29vYjH49i5cydkWYYsyzh79qydsRFRG2OOISIrTPfgDAwM4MKFC+jp6YGmaTh//ry+SB4RkVXMMURkhekenB/+8IcYHR3lHRURNQRzDBFZYboHJ5FIbFoTZm5uznJAREQAcwwRWWO6B8fn8+GXv/wlwuEwRFGEoijIZDLsQiYiWzDHEJEVpgucS5cuYWBgAF988YW+4u/y8rJdcRFRm2OOISIrTBc4ExMTm+6k2H1MRHZhjiEiK+oqcBYWFnD16lWEw2G89tprm37PrmMisoI5hojsYrjAmZubw8jIiP4sXJZl/Pa3v21kbETURphjiMhOht+iSqVSuHnzJv70pz/hxo0b2LVrV8WZRomIzGCOISI7GS5wQqEQ/H6/vj06Ooo7d+40JCgiaj/MMURkJ8MFTjgcLtv2+/3QNK1s38LCgj1REVHbYY4hIjsZLnDy+TwePnxY9qdQKJT9PD093chYicjDmGOIyE6GBxknk0n87ne/K9unaRqmpqb0n30+H86dO2dvhETUFphjiMhOhguc4eFhjI2NVf29pmm4dOmSLUERUfthjiEiOxkucI4ePVo2ALCSoaEhywERUXtijiEiOxkeg9Pb22tLGyKiSphjiMhOplcTJyIiInIrFjhERETkOSxwiIiIyHNY4BAREZHn1LWaODnj9OnTePvtt8v2/eMf/8Dvf/973L9/H4lEAgCwtLSEgwcPQpIk286dyWQAAIqiQBTFqsdOJpMQRREA8N3vfnfTMQRBsDUuIiKierDAcZlMJoO5uTnk83m9gACAr3/96+jo6MCf//xnvcABgL1792J2drasrVn5fB6yLGNiYgIAMDIyUrFIGRkZwYULFyAIAuLxeFmBo6oqLl26hFOnTlmOh4iIyCwWOP9VKBTw6NEjdHZ22nrcYDCIl156yXB7RVEwPDyM6enpLSc9KxEEAaqqWglRJ8ty2Twkfr8fsiyXFTm5XE5vk8vlkE6nsbKyov9+ZmYGg4ODtsRDRERkFgscAA8ePEA8HseXX35p+7G3bduGzz//HF1dXTXbqqqqPxaKx+M1C5xUKoWBgQFEo1FbYl1aWkIwGNS3g8HgpuJpfn4ehUIB+XweADA+Po6f//znAJ4thChJkv6Yi4iIqFlY4ADo6upCOp1GV1dXQ3pwjBQ3wLMelFgsBgAQRXFT7wnwrMAoFRCSJJU9ripRVRUXL16sep6jR48afqSlKMqmYwcCAb2omp+f11d4LhQK+MY3vmHouERERI3EAue/QqEQenp6sGPHjqbFkM1m9Z/7+vowPT29qcAJhUJ6EVSNIAiGHm9tFA6Hy3pslpeXNxVCoiiW7QsEAigUCrh16xb279+PTCaDbDarjyGyq3eJiIioHixwXCKXy5X1rMRiMRw4cMDUscz24EiShMnJSX27UCjoBZaqqvqbUalUSm+Tz+fxne98p6xAzGaz6O/vZ3FDRERNwwLHBWRZxtTUFBKJhP7IKZfLAXg2xuXYsWP417/+hevXr+P+/fsVH12tZ7YHRxRFDA0NIZPJQFGUsjeh4vE40uk0BEFAIpFAKpWCqqo4e/Zs2cBkWZb1t8Ci0agtb3cRERHViwWOC0iShHQ6XbYvGo3i5s2bAICVlRUUi0VMTk42/BFatcdfs7OzVdusf4uq0v8LERGR0ziTMREREXkOCxwiIiLynKY8ojKyHIDRJQOIiCphniFqb4734JSWA4jFYkgkEkgmk6baEBFVwzxDRI734BhZDsBIG6M0TSsbBFvJ6upq2X/dhvFZ08j4Ssf861//aun4jx8/xv3796EoCr7yla/YFZ5tHj9+jGKxiN27d2/ZTtM0+Hw+Z4LaAvNM/bwan13X
aC2tcA27PT67c4zjBY6R5QCMtDFqbW1Nn2m3lsXFRVPncArjs6YR8f373/8GAJw4ccL2Y7vNtm3bcO3atbJrs5Lnn3/emYC2wDxjntfia6drtNXZnWNc8Zr4xuUAzLappKOjA93d3Vu2WV1dxeLiInbv3m37Ug12YHzWNDK+np4e3L59G8Vi0dJxSndXX/va11x9d7Vv374tv8N79+45GFV9mGe25tX47LpGa2mFa9jt8dmdYxwvcIwsB2CkjVE+n8/w3DGdnZ1NXaqhFsZnTaPie/nlly0fY2VlBQsLC01fLqSaUny1vkM3PJ4CmGes8GJ8dlyjtbTKNez2+OzMMY4PMpYkqWzNpY3LAdRqQ0RUC/MMEfk0TdOcPun6VzMDgYA+M+6hQ4f05QCqtanHX/7yF2iaVvN5naZpWFtbQ0dHh2vuQNdjfNa4PT7A/TEaje/Jkyfw+Xx45ZVXHIyuMuaZ+jA+axifNY3IMU0pcJxy69YtaJqGjo6OZodC1BbW1tbg8/mwf//+ZofiGOYZIufUk2M8XeAQERFRe+JSDUREROQ5LHCIiIjIc1jgEBERkeewwCEiIiLPYYFDREREnsMCh4iIiDyHBQ4RERF5DgscIiIi8hwWOEREROQ5LHCIiIjIc1jgEBERkeewwCEiIiLPea7ZATgtk8kAABRFgSiKkCTJVJtmx6coCnK5HGKxmOviW99WEARXxpdMJiGKIgAgFou5Kr5SmxKn4lNVFalUCgBw8uTJim2aeW20CuYYZ2Jc35Z5pv742iLPaG1kaWlJe/PNN/Xt48ePm2rTKEbOPT8/r83MzGiapmmKomjf/OY3XRVfiaIo2pEjR/RYnWA0vuPHj2uKomiapmlHjhxxJDZNMxafoijapUuX9O317RttZmZG+/Wvf112/vWaeW20CuYY65hnrGGe+Z+2ekQlyzL8fr++7ff7Icty3W2aGZ+iKPo+QRAQCASQy+VcE1/JzMwMBgcHHYmrxEh8uVxOb5PL5ZBOp10VnyAISKVS+t/p+vaNFovFEA6Hq/6+mddGq2COcSbGEuYZc/G1S55pqwJnaWkJwWBQ3w4Gg1BVte42zYxPkiRMTEzo24qiIBqNuiY+4NkF3YxHF0bim5+fR6FQQD6fBwCMj4+7Kj4AOHv2LOLxOOLxOEZHRx2Lr5ZmXhutgjnGOuaZxscHtEeeaasCpxJFUWxp0yhbnXt8fBznz593MJrNKsWXz+f1587NtjE+VVURCAQQjUYRjUYxPz/v6N3pRpW+v2w2i3Q6jUAggOPHjzsfVB2aeW20CuYY65hnrGnXPNNWBc7GbrHl5eVNF4iRNo1Sz7kzmQwkSXJ04JqR+JLJpB5fNpuFLMuOXdhG4hNFsWxfIBDQ77LcEF8mk8HBgwcRjUbx7rvvoq+vzzWPgZp5bbQK5hjrmGcaH1+75Jm2KnAkSUI2m9W3C4WC3sVZ6gLbqo0b4gOePaMUBAGxWAy5XM6xC8dIfCdPnkQsFkMsFtNHvzvVvW3073f995XP513196soCgKBQNln1m83gxuujVbBHONMjMwz1uJrlzzj0zRNsyW6FrH+9bNAIKDfnRw6dAjpdBqCIFRt44b4FEVBPB7X26uqik8//dQ18QmCAOBZgpyamkIoFMLY2Jhjd6hG/34VRYGqqhBF0VV/v4IgIJlM6t+jk//+ZFnG9PQ0isUiEomE666NVsEc0/gYmWesx9cOeabtChwiIiLyvrZ6REVERETtgQUOEREReQ4LHCIiIvIcFjhERETkOSxwiIiIyHNY4JCtcrkcxsfHsXfvXkxOTiKZTCKZTGJ8fLxhc2nIsox4PK6vULtxm4i8hXmGjOBr4mQ7VVVx4MAB3Lx5s2y+ijNnzuCjjz7S99mpNKdDIpGouE1E3sI8Q7WwB4ccIUkSVFV1zXTgROQ9zDO0HgscckRpnRgnVyUmovbCPEPrPdfsAMi7SuvZ5HI5LC8vY3Z2tmwq9dICeaIoIpvNYmxsDMCzdVump6fR398PRVEwODioT98tCALy+TyWlpb09kTUvphnqBr24FDDSJKk/5mbmytbzC2fz2NqakpfNC8cDiOZTEJVVYyMjGB0dBSxWAxLS0v6IL4zZ85AFEUkEgkUi0V9vRIial/MM1QNe3Co4aLRKPr6+jA1NYWJiQkAwPT0NAKBQNmz8mw2C0EQIIqiPkBwdHRU/31pMGE+n8fy8rKjKxwTkbsxz9BGLHDIEX6/H9euXSvb19vbC0mS9O1EIoFkMgm/36/vW/8mxMWLFxEMBhGLxRxbNZiIWgfzDK3HR1TkiHA4rN8J5XI5DA0NYW5urqyNLMuIxWK4c+fOpv2yLOPOnTs4efIkRFFEsVjUf1eiqmrZ5zZuE5G3Mc/QepwHh2yVy+Vw9epV5PN59Pf3Q5Ik/Y2G06dPo7+/X583QpZlfPzxx+jv7wfw7Fm6IAgV9wPAL37xCxw9elQ/1/T0NIaGhiCKIt58800AwPnz5wGgbJtvVBB5C/MMGcECh4iIiDyHj6iIiIjIc1jgEBERkeewwCEiIiLPYYFDREREnsMCh4iIiDyHBQ4RERF5DgscIiIi8hwWOEREROQ5LHCIiIjIc1jgEBERkeewwCEiIiLPYYFDREREnvP/963vMpweFMMAAAAASUVORK5CYII=\n",
"text/plain": [
"<Figure size 578.387x178.731 with 2 Axes>"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"fig_save_dir = '../../thesis/graphics/'\n",
"\n",
"fig, ax = plt.subplots(1, 2, figsize=set_size(width, subplots=(1,2)))\n",
"results.plot_pr_curves(iou_thresh=0.5, backend='matplotlib', ax=ax[0], color='black', linewidth=1)\n",
"results.plot_pr_curves(iou_thresh=0.95, backend='matplotlib', ax=ax[1], color='black', linewidth=1)\n",
"# Set the labels for the legends manually because\n",
"# the default ones contain a line for the classes (irrelevant).\n",
"ax[0].legend(['AP = 0.64'], frameon=False)\n",
"ax[1].legend(['AP = 0.40'], frameon=False)\n",
"fig.tight_layout()\n",
"fig.savefig(fig_save_dir + 'APpt5-pt95-final.pdf', format='pdf', bbox_inches='tight')"
]
},
{
"cell_type": "markdown",
"id": "def95455",
"metadata": {},
"source": [
"### View dataset in fiftyone <a name=\"yolofiftyonesession\"></a>\n",
"\n",
"We can launch a fiftyone session in a new tab to explore the dataset and the results. We also create an interactive confusion matrix and attach it to the session."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "75fd090c",
"metadata": {},
"outputs": [],
"source": [
"session = fo.launch_app(yolo_view, auto=False)\n",
"# Create the interactive confusion matrix and attach it to the session\n",
"matrix = results.plot_confusion_matrix(classes=['Plant'])\n",
"session.plots.attach(matrix)\n",
"session.open_tab()"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "751f3d2b",
"metadata": {},
"outputs": [],
"source": [
"session.close()"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.15"
}
},
"nbformat": 4,
"nbformat_minor": 5
}