\documentclass[runningheads]{llncs}

\usepackage{graphicx}
\usepackage[backend=biber,style=numeric]{biblatex}
\usepackage{hyperref}
\hypersetup{
  colorlinks=true,
  linkcolor=black,
  urlcolor=blue,
  citecolor=black
}

\addbibresource{trustworthy-ai.bib}

\begin{document}

\title{Trustworthy Artificial Intelligence}
\author{Tobias Eidelpes}
\authorrunning{T. Eidelpes}
\institute{Technische Universität Wien, Karlsplatz 13, 1040 Wien, Austria
\email{e1527193@student.tuwien.ac.at}}

\maketitle

\begin{abstract}
The abstract should briefly summarize the contents of the paper in 150--250 words.
\keywords{Artificial Intelligence, Trustworthiness, Social Computing}
\end{abstract}

\section{Introduction}
\label{sec:introduction}

The use of artificial intelligence (AI) in computing has seen an unprecedented rise over the last few years. From humble beginnings as a tool to aid humans in decision making to advanced use cases where human interaction is avoided as much as possible, AI has transformed the way we live our lives today. The transformative capabilities of AI are not only felt in computer science, but have spread into a diverse set of other disciplines such as biology, chemistry, mathematics and economics. For the purposes of this work, AIs are machines that can learn, make decisions autonomously and interact with their environment~\cite{russell_artificial_2021}.

While the possibilities of AI seem endless, the public is slowly but steadily learning about its limitations. These limitations manifest themselves in areas such as autonomous driving and medicine, fields where AI can have a direct, potentially life-changing impact on people's lives. A self-driving car operates on roads where accidents can happen at any time. Decisions made by the car before, during and after an accident can have severe consequences for everyone involved. In medicine, AIs are increasingly used to support human decision-making. The more critical the proper use and functioning of AI is, the more trust in its architecture and results is required. Trust, however, is not easily defined, especially in relation to artificial intelligence.

This work will explore the following question: \emph{Can artificial intelligence be trustworthy, and if so, how?} To discuss this question, trust has to be defined and dissected into its constituent components. Section~\ref{sec:modeling-trust} analyzes trust and distills the gained insights into a framework suitable for interactions between humans and artificial intelligence. Section~\ref{sec:taxonomy} approaches trustworthiness in artificial intelligence from a computing perspective: there are various ways to make AIs more \emph{trustworthy} through technical means, and this section discusses and summarizes important methods and approaches. Section~\ref{sec:social-computing} discusses how humans and artificial intelligence can be combined into one coherent system that is capable of achieving more than either part on its own.

\section{Trust}
\label{sec:modeling-trust}

In order to define the requirements and goals of \emph{trustworthy AI}, it is important to know what trust is and how we humans establish trust with someone or something. This section therefore defines and explores different forms of trust.

\subsection{Defining Trust}

Commonly, \emph{trusting someone} means having confidence in another person's ability to do certain things.
This can mean that we trust someone to speak the truth to us, or that a person competently does the things that we \emph{entrust} them to do. We trust the person delivering our mail to do so on time and without letters getting lost on the way to our doors. We trust people knowledgeable in a certain field, such as medicine, to advise us when we need medical advice. Trusting in these contexts means ceding control over a particular aspect of our lives to someone else. We do so in the expectation that the trustee does not violate our \emph{social agreement} by acting against our interests. Often, we are not able to confirm that the trustee has indeed done their job. Sometimes we only find out later that what was done was not in line with our own interests. Trust is therefore also a function of time: people we have previously trusted can, depending on their track record, either retain or lose our trust.

We do not only trust certain people to act on our behalf; we can also place trust in things rather than people. Every technical device or gadget receives our trust to some extent, because we expect it to perform the tasks it was built for. This relationship encompasses \emph{dumb} devices such as vacuum cleaners and refrigerators, as well as seemingly \emph{intelligent} systems such as algorithms performing medical diagnoses. Artificial intelligence systems belong to the latter category when they are functioning well, but can easily slip into the former when, for example, a poorly trained machine learning model classifies every picture of a dog or a cat as a dog. A more formal treatment of trust in interactions between humans and artificial intelligence is given by \textcite{ferrario_ai_2020}.

\section{Taxonomy for Trustworthy AI}
\label{sec:taxonomy}

\section{Social Computing}
\label{sec:social-computing}

\section{Conclusion}
\label{sec:conclusion}

\printbibliography

\end{document}