Add corrections based on supervisor comments

parent 3897065eaa
commit b07dc3d676
@@ -93,20 +93,20 @@ when we need medical advice. Trusting in these contexts means to cede control
 over a particular aspect of our lives to someone else. We do so in expectation
 that the trustee does not violate our \emph{social agreement} by acting against
 our interests. Often times we are not able to confirm that the trustee has
-indeed done his/her job. Sometimes we will only find out later that what was
-in fact done did not happen in line with our own interests. Trust is therefore
-also always a function of time. Previously entrusted people can—depending on
-their track record—either continue to be trusted or lose trust.
+indeed done his/her job. Sometimes we will only find out later that what did
+happen was not in line with our own interests. Trust is therefore also always a
+function of time. Previously entrusted people can—depending on their track
+record—either continue to be trusted or lose trust.
 
 We do not only trust certain people to act on our behalf, we can also place
 trust in things rather than people. Every technical device or gadget receives
 our trust to some extent, because we expect it to do the things we expect it to
 do. This relationship encompasses \emph{dumb} devices such as vacuum cleaners
-and refrigerators, as well as seemingly \emph{intelligent} systems such as
-algorithms performing medical diagnoses. Artificial intelligence systems belong
-to the latter category when they are functioning well, but can easily slip into
-the former in the case of a poorly trained machine learning algorithm that
-simply classifies pictures of dogs and cats always as dogs, for example.
+and refrigerators, as well as \emph{intelligent} systems such as algorithms
+performing medical diagnoses. Artificial intelligence systems belong to the
+latter category when they are functioning well, but can easily slip into the
+former in the case of a poorly trained machine learning algorithm that simply
+classifies pictures of dogs and cats always as dogs, for example.
 
 Scholars usually divide trust either into \emph{cognitive} or
 \emph{non-cognitive} forms. While cognitive trust involves some sort of rational
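A minimal sketch of the failure mode the hunk above describes: a degenerate classifier that ignores its input and always answers "dog" can still report high accuracy on an imbalanced test set, which is exactly why accuracy alone cannot ground trust in such a system. The function name and the 90/10 class split are hypothetical illustration, not taken from the thesis.

```python
# Hypothetical illustration: a "classifier" that ignores its input and
# always answers "dog" -- the degenerate model described in the text.
def always_dog_classifier(image):
    return "dog"  # no learning involved; every input gets the same label

# Imbalanced toy test set: 90 dog pictures, 10 cat pictures (made-up numbers).
test_labels = ["dog"] * 90 + ["cat"] * 10

predictions = [always_dog_classifier(img) for img in test_labels]
accuracy = sum(p == t for p, t in zip(predictions, test_labels)) / len(test_labels)

# Prints "accuracy: 90%" -- the model looks competent by this metric,
# yet it never detects a single cat.
print(f"accuracy: {accuracy:.0%}")
```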
@@ -114,7 +114,7 @@ and objective evaluation of the trustee's capabilities, non-cognitive trust
 lacks such an evaluation. For instance, if a patient comes to a doctor with a
 health problem which resides in the doctor's domain, the patient will place
 trust in the doctor because of the doctor's experience, track record and
-education. The patient thus consciously decides that he/she would rather trust
+education. The patient, thus consciously, decides that he/she would rather trust
 the doctor to solve the problem and not a friend who does not have any
 expertise. Conversely, non-cognitive trust allows humans to place trust in
 people they know well, without a need for rational justification, but just
@@ -298,7 +298,7 @@ made by the model architects, productive bias quickly turns into \emph{erroneous
 bias}. The last category of bias is \emph{discriminatory bias} and is of
 particular relevance when designing artificial intelligence systems.
 
-Fairness, on the other hand, is \enquote{…the absence of any prejudice or
+Fairness, on the other hand, is \enquote{the absence of any prejudice or
 favoritism towards an individual or a group based on their inherent or acquired
 characteristics} \cite[p.~2]{mehrabiSurveyBiasFairness2021}. Fairness in the
 context of artificial intelligence thus means that the system treats groups or
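The quoted definition is often made precise through group-level criteria; one common example from the fairness literature (demographic parity, also covered in the cited Mehrabi et al. survey) requires the favourable prediction to be equally likely across groups. The symbols below ($\hat{Y}$ for the model's prediction, $A$ for a protected attribute) follow the survey's usual notation and are shown as one possible formalization, not as the thesis's own definition.

```latex
% One standard group-fairness criterion (demographic parity):
% the favourable outcome \hat{Y} = 1 must be equally likely regardless of
% the protected attribute A (e.g., membership in group 0 or group 1).
\begin{equation*}
  P\bigl(\hat{Y} = 1 \mid A = 0\bigr) = P\bigl(\hat{Y} = 1 \mid A = 1\bigr)
\end{equation*}
```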