From 4f58287fe580a8dcd4006c6251571220ea9b58ce Mon Sep 17 00:00:00 2001
From: Tobias Eidelpes
Date: Wed, 5 Jan 2022 18:13:14 +0100
Subject: [PATCH] Remove ellipsis

---
 trustworthy-ai.tex | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/trustworthy-ai.tex b/trustworthy-ai.tex
index 860a2cc..813e416 100644
--- a/trustworthy-ai.tex
+++ b/trustworthy-ai.tex
@@ -367,7 +367,7 @@ outcome, the model is inherently explainable. Examples are decision trees,
 linear regression models, rule-based models and Bayesian networks. This
 approach is not possible for neural networks and thus \emph{model-agnostic
 explanations} have to be found. \textsc{LIME} \cite{ribeiroWhyShouldTrust2016} is a tool to
-find such model-agnostic explanations. \textsc{LIME} works \enquote{…by learning
+find such model-agnostic explanations. \textsc{LIME} works \enquote{by learning
 an interpretable model locally around the prediction}
 \cite[p.~1]{ribeiroWhyShouldTrust2016}. An advantage of this approach is that
 \textsc{LIME} is useful for any model, regardless of how it is constructed. Due