Add application of incremental trust model

This commit is contained in:
Tobias Eidelpes 2021-12-14 16:37:05 +01:00
parent f14ee141b4
commit 2b400be64d

model: trustworthiness. Trustworthiness can be defined as the cognitive belief
of $X$ that $Y$ is trustworthy. Reflective trust involves a cognitive process
which allows a trustor to obtain reasons for trusting a potential trustee. $X$
believes in the trustworthiness of $Y$ because there are reasons for $Y$ being
trustworthy. Contrary to simple trust, reflective trust includes the aspect of
control. For an agent $X$ to \emph{reflectively} trust another agent $Y$, $X$
has objective reasons to trust $Y$ but is not willing to do so without control.
Reflective trust need not be expressed in binary form; it can also be expressed
as a subjective measure of confidence: the more likely a trustee $Y$ is to
perform action $A$ towards a goal $G$, the higher $X$'s confidence in $Y$.
Additionally, $X$ might have high reflective trust in $Y$ yet still not trust
$Y$ to perform a given task because of other, potentially unconscious, reasons.
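One way to make this subjective measure precise (a possible formalization, not
one given in \cite{ferrario_ai_2020}) is to treat $X$'s confidence as a
subjective probability:
\begin{equation*}
  \operatorname{conf}_X(Y, A, G)
  = P_X\!\left(Y \text{ performs } A \text{ and thereby achieves } G\right)
  \in [0, 1],
\end{equation*}
so that a binary reading of reflective trust corresponds to thresholding this
probability, e.g.\ trusting $Y$ whenever $\operatorname{conf}_X$ exceeds some
cutoff.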
\subsubsection{Pragmatic Trust}
Pragmatic trust is the last form of trust in the incremental model proposed by
\cite{ferrario_ai_2020}. In addition to having objective
reasons to trust $Y$, $X$ is also willing to do so without control. It is thus a
combination of simple trust and reflective trust. Simple trust provides the
non-cognitive, non-controlling aspect, and reflective trust provides the
cognitive aspect.
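The three forms can be read as combinations of two dimensions. The following
Python sketch makes those combinations explicit; the class name and the two
boolean dimensions are our own illustration, not an API from
\cite{ferrario_ai_2020}:

```python
from dataclasses import dataclass

# Illustrative sketch of the incremental model's three trust forms as
# combinations of two dimensions: whether X has objective reasons to
# trust Y (the cognitive dimension), and whether X is willing to forgo
# control over Y (the non-controlling dimension).

@dataclass(frozen=True)
class TrustState:
    has_objective_reasons: bool  # cognitive dimension (reflective trust)
    forgoes_control: bool        # non-controlling dimension (simple trust)

    def form(self) -> str:
        if self.has_objective_reasons and self.forgoes_control:
            return "pragmatic"   # reflective + simple combined
        if self.has_objective_reasons:
            return "reflective"  # reasons to trust, but control retained
        if self.forgoes_control:
            return "simple"      # no reasons, yet no control exerted either
        return "no trust"
```

On this reading, pragmatic trust is simply the cell of the grid where both
dimensions hold at once, which is exactly the combination described above.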
\subsection{Application of the Model}
Since the incremental model of trust can be applied to human-human as well as
human-AI interactions, an example that draws from both domains will be
presented. The setting is that of a company which ships tailor-made machine
learning (ML) solutions to other firms. On the human-human interaction side,
there are multiple teams working on different aspects of the software. The
hierarchical structure between bosses, their team leaders and their developers
involves different forms of trust. A boss has worked with a specific team
leader in the past and thus knows from experience that the team leader can be
trusted without control (pragmatic trust). The team leader has led this
particular team for a number of projects already but has recently hired a new
junior developer. The team leader has some objective proof that the new hire is
capable of delivering good work on time due to impressive credentials but needs
more time to be able to trust the new colleague without control (reflective
trust).

On the human-AI side, one developer is working on a machine learning algorithm
to achieve a specific goal $G$. Taking the 5-tuple from the incremental model,
$X$ is the developer, $Y$ is the machine learning algorithm and $A$ is the
action the machine learning algorithm takes to achieve the goal $G$. In the
beginning, $X$ does not yet trust $Y$ to do its job properly, because there is
no record of the algorithm's past performance in achieving $G$. While most, if
not all, parameters of $Y$ have to be controlled by $X$ at first, less and less
control is needed as $Y$ achieves $G$ consistently. Consistent performance also
increases $X$'s cognitive trust in $Y$ over time, as accurate performance
metrics accumulate.
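The dynamic just described can be sketched in a few lines of Python. The update
rule below (confidence as the empirical success rate of $Y$ achieving $G$, and
$X$'s control as its complement) is a simplifying assumption made for
illustration, not a rule taken from \cite{ferrario_ai_2020}:

```python
# Illustrative sketch (the update rule is an assumption, not from the
# source): X's confidence in Y is the observed success rate of Y's
# action A at achieving G, and the control X exerts over Y shrinks as
# that confidence grows.

def confidence(outcomes: list[bool]) -> float:
    """Empirical success rate of Y achieving G; 0.0 with no history."""
    return sum(outcomes) / len(outcomes) if outcomes else 0.0

def control_level(outcomes: list[bool]) -> float:
    """Full control (1.0) at the start, less as confidence rises."""
    return 1.0 - confidence(outcomes)

history: list[bool] = []
assert control_level(history) == 1.0      # X controls everything at first
history += [True] * 8 + [False] * 2       # Y achieves G 8 times out of 10
assert confidence(history) == 0.8
assert abs(control_level(history) - 0.2) < 1e-9
```

With an empty history, $X$ exerts full control; as successes accumulate, the
required control falls in step with $X$'s growing confidence in $Y$.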
\section{Taxonomy for Trustworthy AI}
\label{sec:taxonomy}