Add abstract
This commit is contained in:
parent 6927996378
commit 663f1316c8
@@ -28,8 +28,21 @@
\maketitle

\begin{abstract}
As artificial intelligence (AI) systems have come to permeate almost every
aspect of our lives, little attention has been paid to their impact on our
complex societies. Research and reports indicating that AI systems discriminate
against minorities and adversely affect privacy have substantially eroded their
trustworthiness, especially with the advent of increasingly complex and opaque
models. This has given rise to a new research field concerned with increasing
the trustworthiness of AI. This work gives an introduction to the current state
of trustworthy AI and discusses what trust means in different contexts.
Furthermore, approaches to increasing an AI's trustworthiness are discussed
from a technical and computational as well as a social perspective. We find
that a combination of technical and social approaches yields the greatest
benefit in terms of increasing trust in AI systems. We close with concluding
remarks on further research directions and potentially interesting
developments of AI systems.

\keywords{Artificial Intelligence, Trustworthiness, Social Computing}
\end{abstract}