Make separate notebooks for evaluation
This commit is contained in:
parent
5e78036b8b
commit
2cfe51d478
547
classification/evaluation/eval-test-model.ipynb
Normal file
@@ -0,0 +1,547 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "945c9b80",
"metadata": {},
"source": [
"# Table of contents\n",
"1. [Introduction](#introduction)\n",
"2. [Aggregate Model Evaluation](#modelevaluation)\n",
" 1. [Loading the dataset](#modeload)\n",
" 2. [Perform detections](#modeldetect)\n",
" 3. [Evaluate detections](#modeldetectionseval)\n",
" 4. [Calculate results and plot them](#modelshowresults)\n",
" 5. [View dataset in fiftyone](#modelfiftyonesession)"
]
},
{
"cell_type": "markdown",
"id": "01339680",
"metadata": {},
"source": [
"## Introduction <a name=\"introduction\"></a>\n",
"\n",
"This notebook loads the test dataset in YOLOv5 format from disk and evaluates the model's performance."
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "ff25695e",
"metadata": {},
"outputs": [],
"source": [
"import fiftyone as fo\n",
"from PIL import Image\n",
"from detection import detect\n",
"from detection import detect_yolo_only"
]
},
{
"cell_type": "markdown",
"id": "86a5e832",
"metadata": {},
"source": [
"## Aggregate Model Evaluation <a name=\"modelevaluation\"></a>\n",
"\n",
"First, load the dataset from the directory containing the images and the labels in YOLOv5 format.\n",
"\n",
"### Loading the dataset <a name=\"modeload\"></a>"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "bea1038e",
"metadata": {},
"outputs": [],
"source": [
"name = \"dataset\"\n",
"dataset_dir = \"dataset\"\n",
"\n",
"# The splits to load\n",
"splits = [\"val\"]\n",
"\n",
"# Load the dataset, using tags to mark the samples in each split\n",
"dataset = fo.Dataset(name)\n",
"for split in splits:\n",
" dataset.add_dir(\n",
" dataset_dir=dataset_dir,\n",
" dataset_type=fo.types.YOLOv5Dataset,\n",
" split=split,\n",
" tags=split,\n",
" )\n",
"\n",
"dataset.persistent = True\n",
"classes = dataset.default_classes"
]
},
{
"cell_type": "markdown",
"id": "361eeecd",
"metadata": {},
"source": [
"If the dataset already exists because it had been saved under the same name before, load the dataset from fiftyone's folder."
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "2d479be8",
"metadata": {},
"outputs": [],
"source": [
"dataset = fo.load_dataset('dataset')\n",
"classes = dataset.default_classes"
]
},
{
"cell_type": "markdown",
"id": "4485dce3",
"metadata": {},
"source": [
"### Perform detections <a name=\"modeldetect\"></a>\n",
"\n",
"Now we can call the aggregate model to do detections on the images contained in the dataset. The actual detection happens at line 6 where `detect()` is called. This function currently does inference using the GPU via `onnxruntime-gpu`. All detections are saved to the `predictions` keyword of each sample. A sample is one image with potentially multiple detections.\n",
"\n",
"> **_NOTE:_** If the dataset already existed beforehand (you used `load_dataset()`), the detections are likely already saved in the dataset and you can skip the next step."
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "63f675ab",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
" 100% |█████████████████| 640/640 [6.2m elapsed, 0s remaining, 2.1 samples/s] \n"
]
}
],
"source": [
"# Do detections with model and save bounding boxes\n",
"with fo.ProgressBar() as pb:\n",
" for sample in pb(dataset.view()):\n",
" image = Image.open(sample.filepath)\n",
" w, h = image.size\n",
" pred = detect(sample.filepath, '../weights/yolo.onnx', '../weights/resnet.onnx')\n",
"\n",
" detections = []\n",
" for _, row in pred.iterrows():\n",
" xmin, xmax = int(row['xmin']), int(row['xmax'])\n",
" ymin, ymax = int(row['ymin']), int(row['ymax'])\n",
" rel_box = [\n",
" xmin / w, ymin / h, (xmax - xmin) / w, (ymax - ymin) / h\n",
" ]\n",
" detections.append(\n",
" fo.Detection(label=classes[int(row['cls'])],\n",
" bounding_box=rel_box,\n",
" confidence=int(row['cls_conf'])))\n",
"\n",
" sample[\"predictions\"] = fo.Detections(detections=detections)\n",
" sample.save()"
]
},
{
"cell_type": "markdown",
"id": "10d94167",
"metadata": {},
"source": [
"### Evaluate detections against ground truth <a name=\"modeldetectionseval\"></a>\n",
"\n",
"Having saved the predictions, we can now evaluate them by cross-checking with the ground truth labels. If we specify an `eval_key`, true positives, false positives and false negatives will be saved under that key."
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "68cfdad2",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Evaluating detections...\n",
" 100% |█████████████████| 640/640 [2.1s elapsed, 0s remaining, 305.3 samples/s] \n",
"Performing IoU sweep...\n",
" 100% |█████████████████| 640/640 [2.3s elapsed, 0s remaining, 274.2 samples/s] \n"
]
}
],
"source": [
"results = dataset.view().evaluate_detections(\n",
" \"predictions\",\n",
" gt_field=\"ground_truth\",\n",
" eval_key=\"eval\",\n",
" compute_mAP=True,\n",
")"
]
},
{
"cell_type": "markdown",
"id": "94b9751f",
"metadata": {},
"source": [
"### Calculate results and plot them <a name=\"modelshowresults\"></a>\n",
"\n",
"Now we have the performance of the model saved in the `results` variable and can extract various metrics from that. Here we print a simple report of all classes and their precision and recall values as well as the mAP with the metric employed by [COCO](https://cocodataset.org/#detection-eval). Next, a confusion matrix is plotted for each class (in our case only one). Finally, we can show the precision vs. recall curve for a specified threshold value."
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "24df35b4",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
" precision recall f1-score support\n",
"\n",
" Healthy 0.82 0.74 0.78 662\n",
" Stressed 0.71 0.78 0.74 488\n",
"\n",
" micro avg 0.77 0.76 0.76 1150\n",
" macro avg 0.77 0.76 0.76 1150\n",
"weighted avg 0.77 0.76 0.77 1150\n",
"\n",
"0.6225848121901868\n"
]
},
{
"data": {
"text/plain": []
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": []
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "367567b8132d4b3b9ad2566a958a6bbd",
"version_major": 2,
"version_minor": 0
},
"text/plain": [
"FigureWidget({\n",
" 'data': [{'mode': 'markers',\n",
" 'opacity': 0.1,\n",
" 'type': 'scatter',\n",
" 'uid': '79cc4c3d-21f9-416c-af39-7323e0a9570d',\n",
" 'x': array([0, 1, 2, 0, 1, 2, 0, 1, 2]),\n",
" 'y': array([0, 0, 0, 1, 1, 1, 2, 2, 2])},\n",
" {'colorscale': [[0.0, 'rgb(255,245,235)'], [0.125,\n",
" 'rgb(254,230,206)'], [0.25, 'rgb(253,208,162)'],\n",
" [0.375, 'rgb(253,174,107)'], [0.5, 'rgb(253,141,60)'],\n",
" [0.625, 'rgb(241,105,19)'], [0.75, 'rgb(217,72,1)'],\n",
" [0.875, 'rgb(166,54,3)'], [1.0, 'rgb(127,39,4)']],\n",
" 'hoverinfo': 'skip',\n",
" 'showscale': False,\n",
" 'type': 'heatmap',\n",
" 'uid': '266874fc-ec69-4c00-8a44-5305a17c5d3b',\n",
" 'z': array([[105, 158, 0],\n",
" [ 0, 382, 106],\n",
" [493, 0, 169]]),\n",
" 'zmax': 493,\n",
" 'zmin': 0},\n",
" {'colorbar': {'len': 1, 'lenmode': 'fraction'},\n",
" 'colorscale': [[0.0, 'rgb(255,245,235)'], [0.125,\n",
" 'rgb(254,230,206)'], [0.25, 'rgb(253,208,162)'],\n",
" [0.375, 'rgb(253,174,107)'], [0.5, 'rgb(253,141,60)'],\n",
" [0.625, 'rgb(241,105,19)'], [0.75, 'rgb(217,72,1)'],\n",
" [0.875, 'rgb(166,54,3)'], [1.0, 'rgb(127,39,4)']],\n",
" 'hovertemplate': '<b>count: %{z}</b><br>truth: %{y}<br>predicted: %{x}<extra></extra>',\n",
" 'opacity': 0.25,\n",
" 'type': 'heatmap',\n",
" 'uid': '198d0f5a-0e64-45b2-b138-92efdc6c7232',\n",
" 'z': array([[105, 158, 0],\n",
" [ 0, 382, 106],\n",
" [493, 0, 169]]),\n",
" 'zmax': 493,\n",
" 'zmin': 0}],\n",
" 'layout': {'clickmode': 'event',\n",
" 'margin': {'b': 0, 'l': 0, 'r': 0, 't': 30},\n",
" 'template': '...',\n",
" 'title': {},\n",
" 'xaxis': {'constrain': 'domain',\n",
" 'range': [-0.5, 2.5],\n",
" 'tickmode': 'array',\n",
" 'ticktext': [Healthy, Stressed, (none)],\n",
" 'tickvals': array([0, 1, 2])},\n",
" 'yaxis': {'constrain': 'domain',\n",
" 'range': [-0.5, 2.5],\n",
" 'scaleanchor': 'x',\n",
" 'scaleratio': 1,\n",
" 'tickmode': 'array',\n",
" 'ticktext': array(['(none)', 'Stressed', 'Healthy'], dtype=object),\n",
" 'tickvals': array([0, 1, 2])}}\n",
"})"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": []
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": []
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "2141a71d33524ec8a5ff3b3cf0706b72",
"version_major": 2,
"version_minor": 0
},
"text/plain": [
"FigureWidget({\n",
" 'data': [{'customdata': array([99., 99., 99., 99., 99., 99., 99., 99., 99., 99., 99., 98., 98., 98.,\n",
" 97., 97., 97., 97., 96., 96., 96., 95., 94., 93., 93., 92., 91., 91.,\n",
" 90., 89., 88., 87., 87., 86., 86., 85., 85., 84., 82., 82., 81., 80.,\n",
" 80., 79., 78., 78., 77., 76., 75., 74., 72., 70., 70., 68., 67., 66.,\n",
" 65., 64., 62., 62., 61., 59., 58., 56., 53., 52., 51., 50., 0., 0.,\n",
" 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,\n",
" 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,\n",
" 0., 0., 0.]),\n",
" 'hovertemplate': ('<b>class: %{text}</b><br>recal' ... 'customdata:.3f}<extra></extra>'),\n",
" 'line': {'color': '#3366CC'},\n",
" 'mode': 'lines',\n",
" 'name': 'Healthy (AP = 0.562)',\n",
" 'text': array(['Healthy', 'Healthy', 'Healthy', 'Healthy', 'Healthy', 'Healthy',\n",
" 'Healthy', 'Healthy', 'Healthy', 'Healthy', 'Healthy', 'Healthy',\n",
" 'Healthy', 'Healthy', 'Healthy', 'Healthy', 'Healthy', 'Healthy',\n",
" 'Healthy', 'Healthy', 'Healthy', 'Healthy', 'Healthy', 'Healthy',\n",
" 'Healthy', 'Healthy', 'Healthy', 'Healthy', 'Healthy', 'Healthy',\n",
" 'Healthy', 'Healthy', 'Healthy', 'Healthy', 'Healthy', 'Healthy',\n",
" 'Healthy', 'Healthy', 'Healthy', 'Healthy', 'Healthy', 'Healthy',\n",
" 'Healthy', 'Healthy', 'Healthy', 'Healthy', 'Healthy', 'Healthy',\n",
" 'Healthy', 'Healthy', 'Healthy', 'Healthy', 'Healthy', 'Healthy',\n",
" 'Healthy', 'Healthy', 'Healthy', 'Healthy', 'Healthy', 'Healthy',\n",
" 'Healthy', 'Healthy', 'Healthy', 'Healthy', 'Healthy', 'Healthy',\n",
" 'Healthy', 'Healthy', 'Healthy', 'Healthy', 'Healthy', 'Healthy',\n",
" 'Healthy', 'Healthy', 'Healthy', 'Healthy', 'Healthy', 'Healthy',\n",
" 'Healthy', 'Healthy', 'Healthy', 'Healthy', 'Healthy', 'Healthy',\n",
" 'Healthy', 'Healthy', 'Healthy', 'Healthy', 'Healthy', 'Healthy',\n",
" 'Healthy', 'Healthy', 'Healthy', 'Healthy', 'Healthy', 'Healthy',\n",
" 'Healthy', 'Healthy', 'Healthy', 'Healthy', 'Healthy'], dtype='<U7'),\n",
" 'type': 'scatter',\n",
" 'uid': 'ec50e077-27ac-4b13-a822-39473ba1c517',\n",
" 'x': array([0. , 0.01, 0.02, 0.03, 0.04, 0.05, 0.06, 0.07, 0.08, 0.09, 0.1 , 0.11,\n",
" 0.12, 0.13, 0.14, 0.15, 0.16, 0.17, 0.18, 0.19, 0.2 , 0.21, 0.22, 0.23,\n",
" 0.24, 0.25, 0.26, 0.27, 0.28, 0.29, 0.3 , 0.31, 0.32, 0.33, 0.34, 0.35,\n",
" 0.36, 0.37, 0.38, 0.39, 0.4 , 0.41, 0.42, 0.43, 0.44, 0.45, 0.46, 0.47,\n",
" 0.48, 0.49, 0.5 , 0.51, 0.52, 0.53, 0.54, 0.55, 0.56, 0.57, 0.58, 0.59,\n",
" 0.6 , 0.61, 0.62, 0.63, 0.64, 0.65, 0.66, 0.67, 0.68, 0.69, 0.7 , 0.71,\n",
" 0.72, 0.73, 0.74, 0.75, 0.76, 0.77, 0.78, 0.79, 0.8 , 0.81, 0.82, 0.83,\n",
" 0.84, 0.85, 0.86, 0.87, 0.88, 0.89, 0.9 , 0.91, 0.92, 0.93, 0.94, 0.95,\n",
" 0.96, 0.97, 0.98, 0.99, 1. ]),\n",
" 'y': array([1. , 1. , 1. , 1. , 1. , 1. ,\n",
" 1. , 1. , 1. , 1. , 1. , 0.85350318,\n",
" 0.85350318, 0.85350318, 0.85350318, 0.85350318, 0.85350318, 0.85350318,\n",
" 0.85350318, 0.85350318, 0.85350318, 0.83529412, 0.82162162, 0.82142857,\n",
" 0.82142857, 0.8195122 , 0.81696429, 0.81696429, 0.81304348, 0.80526316,\n",
" 0.80526316, 0.80526316, 0.80526316, 0.80526316, 0.80526316, 0.80526316,\n",
" 0.80526316, 0.80526316, 0.80526316, 0.80526316, 0.80526316, 0.80526316,\n",
" 0.80526316, 0.80526316, 0.80526316, 0.80526316, 0.80526316, 0.79746835,\n",
" 0.79301746, 0.79075426, 0.78117647, 0.77181208, 0.77181208, 0.76685934,\n",
" 0.76685934, 0.76685934, 0.76685934, 0.76685934, 0.76685934, 0.76685934,\n",
" 0.76685934, 0.76208178, 0.76146789, 0.75631769, 0.75167785, 0.75167785,\n",
" 0.75167785, 0.75167785, 0. , 0. , 0. , 0. ,\n",
" 0. , 0. , 0. , 0. , 0. , 0. ,\n",
" 0. , 0. , 0. , 0. , 0. , 0. ,\n",
" 0. , 0. , 0. , 0. , 0. , 0. ,\n",
" 0. , 0. , 0. , 0. , 0. , 0. ,\n",
" 0. , 0. , 0. , 0. , 0. ])},\n",
" {'customdata': array([99., 99., 99., 99., 99., 99., 99., 99., 99., 99., 99., 99., 98., 98.,\n",
" 98., 98., 98., 98., 97., 97., 97., 96., 96., 96., 95., 95., 95., 94.,\n",
" 93., 92., 92., 91., 90., 89., 89., 88., 88., 87., 86., 86., 85., 84.,\n",
" 83., 82., 81., 80., 79., 77., 76., 75., 75., 74., 73., 72., 71., 70.,\n",
" 69., 69., 68., 66., 64., 63., 61., 60., 58., 56., 54., 53., 51., 0.,\n",
" 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,\n",
" 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,\n",
" 0., 0., 0.]),\n",
" 'hovertemplate': ('<b>class: %{text}</b><br>recal' ... 'customdata:.3f}<extra></extra>'),\n",
" 'line': {'color': '#DC3912'},\n",
" 'mode': 'lines',\n",
" 'name': 'Stressed (AP = 0.532)',\n",
" 'text': array(['Stressed', 'Stressed', 'Stressed', 'Stressed', 'Stressed', 'Stressed',\n",
" 'Stressed', 'Stressed', 'Stressed', 'Stressed', 'Stressed', 'Stressed',\n",
" 'Stressed', 'Stressed', 'Stressed', 'Stressed', 'Stressed', 'Stressed',\n",
" 'Stressed', 'Stressed', 'Stressed', 'Stressed', 'Stressed', 'Stressed',\n",
" 'Stressed', 'Stressed', 'Stressed', 'Stressed', 'Stressed', 'Stressed',\n",
" 'Stressed', 'Stressed', 'Stressed', 'Stressed', 'Stressed', 'Stressed',\n",
" 'Stressed', 'Stressed', 'Stressed', 'Stressed', 'Stressed', 'Stressed',\n",
" 'Stressed', 'Stressed', 'Stressed', 'Stressed', 'Stressed', 'Stressed',\n",
" 'Stressed', 'Stressed', 'Stressed', 'Stressed', 'Stressed', 'Stressed',\n",
" 'Stressed', 'Stressed', 'Stressed', 'Stressed', 'Stressed', 'Stressed',\n",
" 'Stressed', 'Stressed', 'Stressed', 'Stressed', 'Stressed', 'Stressed',\n",
" 'Stressed', 'Stressed', 'Stressed', 'Stressed', 'Stressed', 'Stressed',\n",
" 'Stressed', 'Stressed', 'Stressed', 'Stressed', 'Stressed', 'Stressed',\n",
" 'Stressed', 'Stressed', 'Stressed', 'Stressed', 'Stressed', 'Stressed',\n",
" 'Stressed', 'Stressed', 'Stressed', 'Stressed', 'Stressed', 'Stressed',\n",
" 'Stressed', 'Stressed', 'Stressed', 'Stressed', 'Stressed', 'Stressed',\n",
" 'Stressed', 'Stressed', 'Stressed', 'Stressed', 'Stressed'], dtype='<U8'),\n",
" 'type': 'scatter',\n",
" 'uid': '0b87ffb2-6c52-4502-83fa-8b42f346bc4a',\n",
" 'x': array([0. , 0.01, 0.02, 0.03, 0.04, 0.05, 0.06, 0.07, 0.08, 0.09, 0.1 , 0.11,\n",
" 0.12, 0.13, 0.14, 0.15, 0.16, 0.17, 0.18, 0.19, 0.2 , 0.21, 0.22, 0.23,\n",
" 0.24, 0.25, 0.26, 0.27, 0.28, 0.29, 0.3 , 0.31, 0.32, 0.33, 0.34, 0.35,\n",
" 0.36, 0.37, 0.38, 0.39, 0.4 , 0.41, 0.42, 0.43, 0.44, 0.45, 0.46, 0.47,\n",
" 0.48, 0.49, 0.5 , 0.51, 0.52, 0.53, 0.54, 0.55, 0.56, 0.57, 0.58, 0.59,\n",
" 0.6 , 0.61, 0.62, 0.63, 0.64, 0.65, 0.66, 0.67, 0.68, 0.69, 0.7 , 0.71,\n",
" 0.72, 0.73, 0.74, 0.75, 0.76, 0.77, 0.78, 0.79, 0.8 , 0.81, 0.82, 0.83,\n",
" 0.84, 0.85, 0.86, 0.87, 0.88, 0.89, 0.9 , 0.91, 0.92, 0.93, 0.94, 0.95,\n",
" 0.96, 0.97, 0.98, 0.99, 1. ]),\n",
" 'y': array([1. , 1. , 1. , 1. , 1. , 1. ,\n",
" 1. , 1. , 1. , 1. , 1. , 1. ,\n",
" 0.89361702, 0.89361702, 0.89361702, 0.89361702, 0.89361702, 0.89361702,\n",
" 0.83333333, 0.83333333, 0.83333333, 0.8 , 0.8 , 0.8 ,\n",
" 0.79503106, 0.79503106, 0.79503106, 0.77714286, 0.77222222, 0.75510204,\n",
" 0.75510204, 0.74519231, 0.72850679, 0.72222222, 0.72222222, 0.71641791,\n",
" 0.71641791, 0.71641791, 0.71641791, 0.71641791, 0.71326165, 0.70877193,\n",
" 0.70748299, 0.70627063, 0.70418006, 0.69811321, 0.68484848, 0.68144044,\n",
" 0.68144044, 0.68144044, 0.68144044, 0.67938931, 0.67938931, 0.67938931,\n",
" 0.67938931, 0.6778043 , 0.6778043 , 0.6778043 , 0.6778043 , 0.66590389,\n",
" 0.66444444, 0.66444444, 0.65450644, 0.64876033, 0.64876033, 0.64788732,\n",
" 0.63547758, 0.63249516, 0.625 , 0. , 0. , 0. ,\n",
" 0. , 0. , 0. , 0. , 0. , 0. ,\n",
" 0. , 0. , 0. , 0. , 0. , 0. ,\n",
" 0. , 0. , 0. , 0. , 0. , 0. ,\n",
" 0. , 0. , 0. , 0. , 0. , 0. ,\n",
" 0. , 0. , 0. , 0. , 0. ])}],\n",
" 'layout': {'margin': {'b': 0, 'l': 0, 'r': 0, 't': 30},\n",
" 'shapes': [{'line': {'dash': 'dash'}, 'type': 'line', 'x0': 0, 'x1': 1, 'y0': 1, 'y1': 0}],\n",
" 'template': '...',\n",
" 'xaxis': {'constrain': 'domain', 'range': [0, 1], 'title': {'text': 'Recall'}},\n",
" 'yaxis': {'constrain': 'domain',\n",
" 'range': [0, 1],\n",
" 'scaleanchor': 'x',\n",
" 'scaleratio': 1,\n",
" 'title': {'text': 'Precision'}}}\n",
"})"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"# Print a classification report for all classes\n",
"results.print_report()\n",
"\n",
"print(results.mAP())\n",
"\n",
"# Plot confusion matrix\n",
"matrix = results.plot_confusion_matrix(classes=classes)\n",
"matrix.show()\n",
"\n",
"pr_curves = results.plot_pr_curves(classes=classes, iou_thresh=0.95)\n",
"pr_curves.show()"
]
},
{
"cell_type": "markdown",
"id": "3871c398",
"metadata": {},
"source": [
"### View dataset in fiftyone <a name=\"modelfiftyonesession\"></a>\n",
"\n",
"We can launch a fiftyone session in a new tab to explore the dataset and the results."
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "bfb39b5d",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Session launched. Run `session.show()` to open the App in a cell output.\n"
]
},
{
"data": {
"application/javascript": [
"window.open('http://localhost:5151/');"
],
"text/plain": [
"<IPython.core.display.Javascript object>"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"session = fo.launch_app(dataset, auto=False)\n",
"session.view = dataset.view()\n",
"session.plots.attach(matrix)\n",
"session.open_tab()"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "e1d00573",
"metadata": {},
"outputs": [],
"source": [
"session.close()"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "53a67321",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.15"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
@@ -7,14 +7,7 @@
 "source": [
 "# Table of contents\n",
 "1. [Introduction](#introduction)\n",
-"2. [Aggregate Model Evaluation](#modelevaluation)\n",
-" 1. [Loading the dataset](#modeload)\n",
-" 2. [Perform detections](#modeldetect)\n",
-" 3. [Save detections](#modeldetectionssave)\n",
-" 4. [Evaluate detections](#modeldetectionseval)\n",
-" 5. [Calculate results and plot them](#modelshowresults)\n",
-" 6. [View dataset in fiftyone](#modelfiftyonesession)\n",
-"3. [YOLO Evaluation](#yoloevaluation)\n",
+"2. [YOLO Evaluation](#yoloevaluation)\n",
 " 1. [Load OIDv6](#yololoadoid)\n",
 " 2. [Merge labels into one](#yolomergelabels)\n",
 " 3. [Load YOLOv5 dataset](#yololoadv5)\n",
@@ -31,7 +24,7 @@
 "source": [
 "## Introduction <a name=\"introduction\"></a>\n",
 "\n",
-"This notebook loads the test dataset in YOLOv5 format from disk and evaluates the model's performance."
+"This notebook loads the test dataset in YOLOv5 format from disk and evaluates the object detection model's performance."
 ]
 },
 {
@ -47,228 +40,6 @@
|
|||||||
"from detection import detect_yolo_only"
|
"from detection import detect_yolo_only"
|
||||||
]
|
]
|
||||||
},
|
},
|
||||||
{
|
|
||||||
"cell_type": "markdown",
|
|
||||||
"id": "bafcbf96",
|
|
||||||
"metadata": {},
|
|
||||||
"source": [
|
|
||||||
"## Aggregate Model Evaluation <a name=\"modelevaluation\"></a>\n",
|
|
||||||
"\n",
|
|
||||||
"First, load the dataset from the directory containing the images and the labels in YOLOv5 format.\n",
|
|
||||||
"\n",
|
|
||||||
"### Loading the dataset <a name=\"modeload\"></a>"
|
|
||||||
]
|
|
||||||
},
|
|
||||||
{
|
|
||||||
"cell_type": "code",
|
|
||||||
"execution_count": null,
|
|
||||||
"id": "6343aa55",
|
|
||||||
"metadata": {},
|
|
||||||
"outputs": [],
|
|
||||||
"source": [
|
|
||||||
"name = \"dataset\"\n",
|
|
||||||
"dataset_dir = \"dataset\"\n",
|
|
||||||
"\n",
|
|
||||||
"# The splits to load\n",
|
|
||||||
"splits = [\"val\"]\n",
|
|
||||||
"\n",
|
|
||||||
"# Load the dataset, using tags to mark the samples in each split\n",
|
|
||||||
"dataset = fo.Dataset(name)\n",
|
|
||||||
"for split in splits:\n",
|
|
||||||
" dataset.add_dir(\n",
|
|
||||||
" dataset_dir=dataset_dir,\n",
|
|
||||||
" dataset_type=fo.types.YOLOv5Dataset,\n",
|
|
||||||
" split=split,\n",
|
|
||||||
" tags=split,\n",
|
|
||||||
" )\n",
|
|
||||||
"\n",
|
|
||||||
"classes = dataset.default_classes"
|
|
||||||
]
|
|
||||||
},
|
|
||||||
{
|
|
||||||
"cell_type": "markdown",
|
|
||||||
"id": "073ce554",
|
|
||||||
"metadata": {},
|
|
||||||
"source": [
|
|
||||||
"If the dataset already exists because it had been saved under the same name before, load the dataset from fiftyone's folder."
|
|
||||||
]
|
|
||||||
},
|
|
||||||
{
|
|
||||||
"cell_type": "code",
|
|
||||||
"execution_count": null,
|
|
||||||
"id": "8681fc92",
|
|
||||||
"metadata": {},
|
|
||||||
"outputs": [],
|
|
||||||
"source": [
|
|
||||||
"dataset = fo.load_dataset('dataset')\n",
|
|
||||||
"classes = dataset.default_classes"
|
|
||||||
]
|
|
||||||
},
|
|
||||||
{
|
|
||||||
"cell_type": "markdown",
|
|
||||||
"id": "ab97bece",
|
|
||||||
"metadata": {},
|
|
||||||
"source": [
|
|
||||||
"### Perform detections <a name=\"modeldetect\"></a>\n",
|
|
||||||
"\n",
|
|
||||||
"Now we can call the aggregate model to do detections on the images contained in the dataset. The actual detection happens at line 6 where `detect()` is called. This function currently does inference using the GPU via `onnxruntime-gpu`. All detections are saved to the `predictions` keyword of each sample. A sample is one image with potentially multiple detections."
|
|
||||||
]
|
|
||||||
},
|
|
||||||
{
|
|
||||||
"cell_type": "code",
|
|
||||||
"execution_count": null,
|
|
||||||
"id": "29827e3f",
|
|
||||||
"metadata": {},
|
|
||||||
"outputs": [],
|
|
||||||
"source": [
|
|
||||||
"# Do detections with model and save bounding boxes\n",
|
|
||||||
"with fo.ProgressBar() as pb:\n",
|
|
||||||
" for sample in pb(dataset.view()):\n",
|
|
||||||
" image = Image.open(sample.filepath)\n",
|
|
||||||
" w, h = image.size\n",
|
|
||||||
" pred = detect(sample.filepath, '../weights/yolo.onnx', '../weights/resnet.onnx')\n",
|
|
||||||
"\n",
|
|
||||||
" detections = []\n",
|
|
||||||
" for _, row in pred.iterrows():\n",
|
|
||||||
" xmin, xmax = int(row['xmin']), int(row['xmax'])\n",
|
|
||||||
" ymin, ymax = int(row['ymin']), int(row['ymax'])\n",
|
|
||||||
" rel_box = [\n",
|
|
||||||
" xmin / w, ymin / h, (xmax - xmin) / w, (ymax - ymin) / h\n",
|
|
||||||
" ]\n",
|
|
||||||
" detections.append(\n",
|
|
||||||
" fo.Detection(label=classes[int(row['cls'])],\n",
|
|
||||||
" bounding_box=rel_box,\n",
|
|
||||||
" confidence=int(row['cls_conf'])))\n",
|
|
||||||
"\n",
|
|
||||||
" sample[\"predictions\"] = fo.Detections(detections=detections)\n",
|
|
||||||
" sample.save()"
|
|
||||||
]
|
|
||||||
},
|
|
||||||
{
|
|
||||||
"cell_type": "markdown",
|
|
||||||
"id": "39ce167e",
|
|
||||||
"metadata": {},
|
|
||||||
"source": [
|
|
||||||
"### Save detections <a name=\"modeldetectionssave\"></a>\n",
|
|
||||||
"\n",
|
|
||||||
"We have to make sure that the predictions for each sample are saved within the dataset. That is why we must call `dataset.save()` manually. The `persistent` flag is set again just to make sure."
|
|
||||||
]
|
|
||||||
},
|
|
||||||
{
|
|
||||||
"cell_type": "code",
|
|
||||||
"execution_count": null,
|
|
||||||
"id": "06e1b4c0",
|
|
||||||
"metadata": {},
|
|
||||||
"outputs": [],
|
|
||||||
"source": [
|
|
||||||
"dataset.persistent = True\n",
|
|
||||||
"dataset.save()"
|
|
||||||
]
|
|
||||||
},
|
|
||||||
{
|
|
||||||
"cell_type": "markdown",
|
|
||||||
"id": "0c9f9304",
|
|
||||||
"metadata": {},
|
|
||||||
"source": [
|
|
||||||
"### Evaluate detections against ground truth <a name=\"modeldetectionseval\"></a>\n",
|
|
||||||
"\n",
|
|
||||||
"Having saved the predictions, we can now evaluate them by cross-checking with the ground truth labels. If we specify an `eval_key`, true positives, false positives and false negatives will be saved under that key."
|
|
||||||
]
|
|
||||||
},
|
|
||||||
{
|
|
||||||
"cell_type": "code",
|
|
||||||
"execution_count": null,
|
|
||||||
"id": "8ad67806",
|
|
||||||
"metadata": {},
|
|
||||||
"outputs": [],
|
|
||||||
"source": [
|
|
||||||
"results = dataset.view().evaluate_detections(\n",
|
|
||||||
" \"predictions\",\n",
|
|
||||||
" gt_field=\"ground_truth\",\n",
|
|
||||||
" eval_key=\"eval\",\n",
|
|
||||||
" compute_mAP=True,\n",
|
|
||||||
")"
|
|
||||||
]
|
|
||||||
},
{
"cell_type": "markdown",
"id": "9e403f93",
"metadata": {},
"source": [
"### Calculate results and plot them <a name=\"modelshowresults\"></a>\n",
"\n",
"The model's performance is now stored in the `results` variable, from which we can extract various metrics. Here we print a simple report with the precision and recall of every class, as well as the mAP computed with the metric employed by [COCO](https://cocodataset.org/#detection-eval). Next, a confusion matrix is plotted over the classes (in our case only one). Finally, we plot the precision vs. recall curve at a specified IoU threshold."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "b180420b",
"metadata": {},
"outputs": [],
"source": [
"# Print a classification report for all classes\n",
"results.print_report()\n",
"\n",
"print(results.mAP())\n",
"\n",
"# Plot confusion matrix\n",
"matrix = results.plot_confusion_matrix(classes=classes)\n",
"matrix.show()\n",
"\n",
"pr_curves = results.plot_pr_curves(classes=classes, iou_thresh=0.95)\n",
"pr_curves.show()"
]
},
{
"cell_type": "markdown",
"id": "2d48bb3f",
"metadata": {},
"source": [
"### View dataset in fiftyone <a name=\"modelfiftyonesession\"></a>\n",
"\n",
"We can launch a FiftyOne session in a new tab to explore the dataset and the evaluation results interactively."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d1137788",
"metadata": {},
"outputs": [],
"source": [
"session = fo.launch_app(dataset, auto=False)\n",
"session.view = dataset.view()\n",
"session.plots.attach(matrix)\n",
"session.open_tab()"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "6eed9a86",
"metadata": {},
"outputs": [],
"source": [
"# Write final dataset to disk\n",
"def export_dataset(dataset, export_dir, classes=[\"Healthy\", \"Stressed\"]):\n",
"    label_field = \"ground_truth\"\n",
"\n",
"    # The splits to export\n",
"    splits = [\"val\"]\n",
"\n",
"    # Export the splits\n",
"    for split in splits:\n",
"        split_view = dataset.match_tags(split)\n",
"        split_view.export(\n",
"            export_dir=export_dir,\n",
"            dataset_type=fo.types.YOLOv5Dataset,\n",
"            label_field=label_field,\n",
"            split=split,\n",
"            classes=classes,\n",
"        )"
]
},
{
"cell_type": "markdown",
"id": "22561d30",
@@ -533,7 +304,7 @@
{
"cell_type": "code",
"execution_count": 5,
-"id": "c77a4844",
+"id": "04de8608",
"metadata": {},
"outputs": [],
"source": [
@@ -547,7 +318,7 @@
},
{
"cell_type": "markdown",
-"id": "ea22d143",
+"id": "78144a0a",
"metadata": {},
"source": [
"The code for the LaTeX table of the classification report can be obtained by converting the results to a pandas DataFrame and calling its `to_latex()` method; the output can then be pasted into the LaTeX document."
@@ -587,7 +358,7 @@
{
"cell_type": "code",
"execution_count": 14,
-"id": "6d857d9d",
+"id": "5cbf690d",
"metadata": {},
"outputs": [
{
@@ -669,7 +440,9 @@
"id": "751f3d2b",
"metadata": {},
"outputs": [],
-"source": []
+"source": [
+"session.close()"
+]
}
],
"metadata": {
620 classification/evaluation/eval-train-yolo.ipynb Normal file
File diff suppressed because one or more lines are too long
Binary files not shown.
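For intuition, the IoU-based matching and true/false-positive bookkeeping that `evaluate_detections` performs can be sketched in plain Python. This is a hypothetical minimal version for illustration only, not FiftyOne's actual implementation; the `(x1, y1, x2, y2)` box format and the greedy, confidence-ordered matching at a fixed IoU threshold are assumptions.

```python
def iou(a, b):
    """Intersection over union of two (x1, y1, x2, y2) boxes."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0

def match_detections(preds, gts, iou_thresh=0.5):
    """Greedily match confidence-sorted predictions to ground truth.

    preds: list of ((x1, y1, x2, y2), confidence) tuples
    gts:   list of (x1, y1, x2, y2) boxes
    Returns (tp, fp, fn) counts at the given IoU threshold.
    """
    matched, tp, fp = set(), 0, 0
    for box, _conf in sorted(preds, key=lambda p: -p[1]):
        # Find the best still-unmatched ground-truth box above the threshold
        best, best_iou = None, iou_thresh
        for i, gt in enumerate(gts):
            if i not in matched and iou(box, gt) >= best_iou:
                best, best_iou = i, iou(box, gt)
        if best is None:
            fp += 1            # no ground truth explains this prediction
        else:
            matched.add(best)  # each ground truth can be matched only once
            tp += 1
    fn = len(gts) - len(matched)
    return tp, fp, fn

# One correct detection and one spurious one against a single ground truth
tp, fp, fn = match_detections(
    preds=[((0, 0, 10, 10), 0.9), ((50, 50, 60, 60), 0.4)],
    gts=[(1, 1, 10, 10)],
)
precision = tp / (tp + fp)  # 0.5
recall = tp / (tp + fn)     # 1.0
```

With this bookkeeping, precision is `tp / (tp + fp)` and recall is `tp / (tp + fn)`; sweeping the confidence threshold over the sorted predictions traces out the precision-recall curve that `plot_pr_curves` draws, and averaging over IoU thresholds gives a COCO-style mAP.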