Change em-dash to ---

Tobias Eidelpes 2022-01-05 15:13:32 +01:00
parent b07dc3d676
commit f5cf8fcd4e


@@ -50,14 +50,14 @@ environment~\cite{russellArtificialIntelligenceModern2021}.
While the possibilities of AI are seemingly endless, the public is slowly but
steadily learning about its limitations. These limitations manifest themselves
in areas such as autonomous driving and medicine. These are fields
where AI can have a direct—potentially life-changing—impact on people's lives. A
self-driving car operates on roads where accidents can happen at any time.
Decisions made by the car before, during and after the accident can result in
severe consequences for all participants. In medicine, AIs are increasingly used
to drive human decision-making. The more critical the proper use and functioning
of AI is, the more trust in its architecture and results is required. Trust,
however, is not easily defined, especially in relation to artificial
intelligence.
where AI can have a direct---potentially life-changing---impact on people's
lives. A self-driving car operates on roads where accidents can happen at any
time. Decisions made by the car before, during and after the accident can result
in severe consequences for all participants. In medicine, AIs are increasingly
used to drive human decision-making. The more critical the proper use and
functioning of AI is, the more trust in its architecture and results is
required. Trust, however, is not easily defined, especially in relation to
artificial intelligence.
This work will explore the following question: \emph{Can artificial intelligence
be trustworthy, and if so, how?} To discuss this question, trust has
@@ -95,8 +95,8 @@ that the trustee does not violate our \emph{social agreement} by acting against
our interests. Oftentimes we are not able to confirm that the trustee has
indeed done his/her job. Sometimes we only find out later that what happened
was not in line with our own interests. Trust is therefore also always a
function of time. Previously entrusted people can—depending on their track
record—either continue to be trusted or lose trust.
function of time. Previously entrusted people can---depending on their track
record---either continue to be trusted or lose trust.
Not only do we trust certain people to act on our behalf; we can also place
trust in things rather than people. Every technical device or gadget receives
@@ -122,8 +122,8 @@ because of their existing relationship.
Due to the different dimensions of trust and its inherent complexity in
different contexts, frameworks for trust are an active field of research. One
such framework—proposed by \textcite{ferrarioAIWeTrust2020}—will be discussed in
the following sections.
such framework---proposed by \textcite{ferrarioAIWeTrust2020}---will be
discussed in the following sections.
\subsection{Incremental Model of Trust}
@@ -243,8 +243,8 @@ the model receives from its beneficiaries. One such example may be Netflix's
movie recommendation system, which receives information about the types of
movies certain users are interested in. A malicious user could therefore attack
the recommendation engine by supplying wrong inputs. \emph{Evasion attacks}
consist of alterations that
are made to the input samples in such a way that these changes—while
generally invisible to the human eye—mislead the algorithm.
are made to the input samples in such a way that these changes---while
generally invisible to the human eye---mislead the algorithm.
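To make the mechanism concrete, below is a minimal sketch of such an evasion
attack in the spirit of the fast gradient sign method (FGSM). It is not taken
from the thesis or from any particular library; the toy logistic-regression
model, its random weights, and the perturbation budget eps are all
illustrative assumptions.

```python
import numpy as np

# Minimal, self-contained sketch of an evasion attack in the spirit of
# the fast gradient sign method (FGSM). The logistic "model", its
# weights, and the perturbation budget eps are illustrative
# assumptions, not taken from the thesis.

rng = np.random.default_rng(0)

# Toy model: logistic regression with fixed random weights.
w = rng.normal(size=64)
b = 0.1

def predict_proba(x):
    """Probability that the model assigns x to class 1."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

# A benign input constructed so the model confidently predicts class 1.
x = 0.05 * w

# Gradient of the logistic loss for the true label y = 1 with respect
# to the input: dL/dx = (p - y) * w.
p = predict_proba(x)
grad = (p - 1.0) * w

# FGSM step: move every feature by eps in the direction that increases
# the loss. Each individual change is small, yet the prediction flips.
eps = 0.25
x_adv = x + eps * np.sign(grad)

print(f"clean score:            {predict_proba(x):.3f}")
print(f"adversarial score:      {predict_proba(x_adv):.3f}")
print(f"max per-feature change: {np.max(np.abs(x_adv - x)):.2f}")
```

Even though no single feature of x_adv differs from x by more than eps, the
model's confidence for class 1 collapses. Real evasion attacks on image
classifiers exploit the same effect in far higher dimensions, which is why the
perturbations can remain invisible to the human eye.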
\emph{White-box attacks} allow an attacker to clearly see all parameters and all
functions of a model. \emph{Black-box attackers}, on the other hand, can only
@@ -271,11 +271,11 @@ trade-offs is therefore critical for real-world applications.
Non-discrimination and fairness are two important properties of any artificial
intelligence system. If one or both of them are violated, trust in the system
erodes quickly. Often researchers only find out about a system's discriminatory
behavior when the system has been in place for a long time. In other
cases—such as with the chatbot Tay from Microsoft Research—the problems become
immediately apparent once the algorithm is live. Countless other models have
been shown to be biased on multiple fronts: the US recidivism prediction
software \textsc{COMPAS} is biased against black people
behavior when the system has been in place for a long time. In other
cases---such as with the chatbot Tay from Microsoft Research---the problems
become immediately apparent once the algorithm is live. Countless other models
have been shown to be biased on multiple fronts: the US recidivism prediction
software \textsc{COMPAS} is biased against black people
\cite{angwinMachineBias2016}, camera software for blink detection is biased
against Asian eyes \cite{roseFaceDetectionCamerasGlitches2010}, and gender-based
discrimination in the placement of career advertisements