Crockett, Keeley ORCID: https://orcid.org/0000-0003-1941-6201, Goltz, S, Garratt, M and Latham, Annabel ORCID: https://orcid.org/0000-0002-8410-7950 (2019) Trust in Computational Intelligence Systems: A Case Study in Public Perceptions. In: IEEE Congress on Evolutionary Computation, 10 June 2019 - 13 June 2019, Wellington, New Zealand.
Accepted Version. Available under License In Copyright.
Abstract
The public debate about trust in Computational Intelligence (CI) systems is not new, but it has seen a recent rise. This is mainly due to the explosion of technological innovations brought to public attention, from lab to reality, usually through media reporting. This growth in public attention was further compounded by the 2018 GDPR legislation and new laws concerning the right to explainable systems, including requirements for “accurate data”, “clear logic” and the “use of appropriate mathematical and statistical procedures for profiling”. Trust is therefore not just a topic for debate – it must be addressed from the outset, from the selection of the fundamental machine learning processes used to build models embedded within autonomous decision-making systems, to the selection of training, validation and testing data. This paper presents current work on trust in the field of Computational Intelligence systems and discusses the legal framework we should ascribe to trust in CI systems. A case study examining current public perceptions of recent CI-inspired technologies, conducted at a national science festival, is presented with some surprising results. Finally, we look at current research underway that aims to increase trust in Computational Intelligence systems, and we identify a clear educational gap.