Add cognitive and non-cognitive trust
This commit is contained in:
parent
fea362696d
commit
eefe4ae212
@ -106,7 +106,43 @@ to the latter category when they are functioning well, but can easily slip into
the former in the case of a poorly trained machine learning algorithm that
simply classifies pictures of dogs and cats always as dogs, for example.

Scholars usually divide trust into either \emph{cognitive} or
\emph{non-cognitive} forms. While cognitive trust involves a rational,
objective evaluation of the trustee's capabilities, non-cognitive trust lacks
such an evaluation. For instance, a patient who comes to a doctor with a
health problem in the doctor's domain will place trust in the doctor because
of the doctor's experience, track record and education. The patient thus
consciously decides to trust the doctor to solve the problem rather than a
friend who lacks the relevant expertise. Conversely, non-cognitive trust
allows humans to place trust in people they know well, without any need for
rational justification, simply because of their existing relationship.

Due to the different dimensions of trust and its inherent complexity in
different contexts, frameworks for trust are an active field of research. One
such framework, proposed by \textcite{ferrario_ai_2020}, will be discussed in
the following sections.

\subsection{Incremental Model of Trust}

The framework by \textcite{ferrario_ai_2020} consists of three types of trust:
simple trust, reflective trust and paradigmatic trust. Their model is thus
built around the triple

\[ T = \langle\text{simple trust}, \text{reflective trust}, \text{paradigmatic
trust}\rangle \]

\noindent and a 5-tuple
\[ \langle X, Y, A, G, C\rangle \]
\noindent where $X$ and $Y$ denote interacting agents and $A$ the action to be
performed by agent $Y$ to achieve goal $G$; $C$ stands for the context in
which the action takes place.
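
To make the notation concrete, the doctor example from above can be expressed
as an instance of this 5-tuple (an illustrative instantiation of ours, not one
given by \textcite{ferrario_ai_2020}):

\[ \langle \underbrace{\text{patient}}_{X}, \underbrace{\text{doctor}}_{Y},
\underbrace{\text{treat the illness}}_{A},
\underbrace{\text{regain health}}_{G},
\underbrace{\text{medical consultation}}_{C} \rangle \]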
\subsubsection{Simple Trust}
\section{Taxonomy for Trustworthy AI}
\label{sec:taxonomy}